00:00:00.001 Started by upstream project "autotest-nightly" build number 4280
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3643
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.155 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.156 The recommended git tool is: git
00:00:00.156 using credential 00000000-0000-0000-0000-000000000002
00:00:00.166 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.215 Fetching changes from the remote Git repository
00:00:00.217 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.258 Using shallow fetch with depth 1
00:00:00.258 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.258 > git --version # timeout=10
00:00:00.297 > git --version # 'git version 2.39.2'
00:00:00.297 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.311 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.311 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.146 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.160 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.173 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.173 > git config core.sparsecheckout # timeout=10
00:00:06.185 > git read-tree -mu HEAD # timeout=10
00:00:06.201 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.220 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.220 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.322 [Pipeline] Start of Pipeline
00:00:06.336 [Pipeline] library
00:00:06.338 Loading library shm_lib@master
00:00:06.339 Library shm_lib@master is cached. Copying from home.
00:00:06.361 [Pipeline] node
00:00:06.377 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.378 [Pipeline] {
00:00:06.387 [Pipeline] catchError
00:00:06.388 [Pipeline] {
00:00:06.398 [Pipeline] wrap
00:00:06.405 [Pipeline] {
00:00:06.416 [Pipeline] stage
00:00:06.418 [Pipeline] { (Prologue)
00:00:06.635 [Pipeline] sh
00:00:06.911 + logger -p user.info -t JENKINS-CI
00:00:06.927 [Pipeline] echo
00:00:06.928 Node: GP11
00:00:06.935 [Pipeline] sh
00:00:07.230 [Pipeline] setCustomBuildProperty
00:00:07.241 [Pipeline] echo
00:00:07.242 Cleanup processes
00:00:07.246 [Pipeline] sh
00:00:07.525 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.525 2739163 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.537 [Pipeline] sh
00:00:07.821 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.821 ++ grep -v 'sudo pgrep'
00:00:07.821 ++ awk '{print $1}'
00:00:07.821 + sudo kill -9
00:00:07.821 + true
00:00:07.832 [Pipeline] cleanWs
00:00:07.839 [WS-CLEANUP] Deleting project workspace...
00:00:07.839 [WS-CLEANUP] Deferred wipeout is used...
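The "Cleanup processes" step above kills any stale SPDK processes left in the workspace before the build starts. A minimal sketch of that same pgrep/grep/awk/kill pattern follows; the workspace path is illustrative and `sudo` is dropped so it runs unprivileged, so this is not the pipeline's actual script.

```shell
#!/usr/bin/env bash
# Hedged sketch of the stale-process cleanup seen in the log above.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Collect PIDs whose command line mentions the workspace, excluding the
# pgrep invocation itself (the log's `grep -v 'sudo pgrep'`).
pids=$(pgrep -af "$WORKSPACE" | grep -v 'pgrep' | awk '{print $1}')

# `kill -9` with no operands exits non-zero, hence the `|| true` guard
# that mirrors the `+ true` entry in the log.
kill -9 $pids 2>/dev/null || true
status="cleanup done"
echo "$status"
```

Note the guard matters: when no stale processes exist, `$pids` is empty and `kill` would otherwise fail the step under `set -e`.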
00:00:07.845 [WS-CLEANUP] done
00:00:07.847 [Pipeline] setCustomBuildProperty
00:00:07.857 [Pipeline] sh
00:00:08.130 + sudo git config --global --replace-all safe.directory '*'
00:00:08.198 [Pipeline] httpRequest
00:00:08.613 [Pipeline] echo
00:00:08.615 Sorcerer 10.211.164.20 is alive
00:00:08.622 [Pipeline] retry
00:00:08.624 [Pipeline] {
00:00:08.632 [Pipeline] httpRequest
00:00:08.636 HttpMethod: GET
00:00:08.636 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.636 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.653 Response Code: HTTP/1.1 200 OK
00:00:08.653 Success: Status code 200 is in the accepted range: 200,404
00:00:08.654 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:27.873 [Pipeline] }
00:00:27.890 [Pipeline] // retry
00:00:27.898 [Pipeline] sh
00:00:28.179 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:28.195 [Pipeline] httpRequest
00:00:28.593 [Pipeline] echo
00:00:28.594 Sorcerer 10.211.164.20 is alive
00:00:28.603 [Pipeline] retry
00:00:28.605 [Pipeline] {
00:00:28.619 [Pipeline] httpRequest
00:00:28.623 HttpMethod: GET
00:00:28.623 URL: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz
00:00:28.624 Sending request to url: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz
00:00:28.632 Response Code: HTTP/1.1 200 OK
00:00:28.632 Success: Status code 200 is in the accepted range: 200,404
00:00:28.632 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz
00:01:31.204 [Pipeline] }
00:01:31.223 [Pipeline] // retry
00:01:31.229 [Pipeline] sh
00:01:31.513 + tar --no-same-owner -xf spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz
00:01:34.055 [Pipeline] sh
00:01:34.334 + git -C spdk log --oneline -n5
00:01:34.334 d47eb51c9 bdev: fix a race between reset start and complete
00:01:34.334 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:01:34.334 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort()
00:01:34.334 4bcab9fb9 correct kick for CQ full case
00:01:34.334 8531656d3 test/nvmf: Interrupt test for local pcie nvme device
00:01:34.344 [Pipeline] }
00:01:34.358 [Pipeline] // stage
00:01:34.367 [Pipeline] stage
00:01:34.369 [Pipeline] { (Prepare)
00:01:34.385 [Pipeline] writeFile
00:01:34.399 [Pipeline] sh
00:01:34.680 + logger -p user.info -t JENKINS-CI
00:01:34.692 [Pipeline] sh
00:01:34.974 + logger -p user.info -t JENKINS-CI
00:01:34.985 [Pipeline] sh
00:01:35.265 + cat autorun-spdk.conf
00:01:35.265 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:35.265 SPDK_TEST_NVMF=1
00:01:35.265 SPDK_TEST_NVME_CLI=1
00:01:35.265 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:35.265 SPDK_TEST_NVMF_NICS=e810
00:01:35.265 SPDK_RUN_ASAN=1
00:01:35.265 SPDK_RUN_UBSAN=1
00:01:35.265 NET_TYPE=phy
00:01:35.272 RUN_NIGHTLY=1
00:01:35.278 [Pipeline] readFile
00:01:35.302 [Pipeline] withEnv
00:01:35.305 [Pipeline] {
00:01:35.319 [Pipeline] sh
00:01:35.605 + set -ex
00:01:35.605 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:35.605 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:35.605 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:35.605 ++ SPDK_TEST_NVMF=1
00:01:35.605 ++ SPDK_TEST_NVME_CLI=1
00:01:35.605 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:35.605 ++ SPDK_TEST_NVMF_NICS=e810
00:01:35.605 ++ SPDK_RUN_ASAN=1
00:01:35.605 ++ SPDK_RUN_UBSAN=1
00:01:35.605 ++ NET_TYPE=phy
00:01:35.605 ++ RUN_NIGHTLY=1
00:01:35.605 + case $SPDK_TEST_NVMF_NICS in
00:01:35.605 + DRIVERS=ice
00:01:35.605 + [[ tcp == \r\d\m\a ]]
00:01:35.605 + [[ -n ice ]]
00:01:35.605 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:35.605 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:35.605 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:35.605 rmmod: ERROR: Module irdma is not currently loaded
00:01:35.605 rmmod: ERROR: Module i40iw is not currently loaded
00:01:35.605 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:35.605 + true
00:01:35.605 + for D in $DRIVERS
00:01:35.605 + sudo modprobe ice
00:01:35.605 + exit 0
00:01:35.615 [Pipeline] }
00:01:35.630 [Pipeline] // withEnv
00:01:35.634 [Pipeline] }
00:01:35.648 [Pipeline] // stage
00:01:35.657 [Pipeline] catchError
00:01:35.659 [Pipeline] {
00:01:35.673 [Pipeline] timeout
00:01:35.673 Timeout set to expire in 1 hr 0 min
00:01:35.675 [Pipeline] {
00:01:35.688 [Pipeline] stage
00:01:35.691 [Pipeline] { (Tests)
00:01:35.705 [Pipeline] sh
00:01:35.990 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:35.990 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:35.990 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:35.990 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:35.990 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:35.990 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:35.990 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:35.990 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:35.990 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:35.990 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:35.990 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:35.990 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:35.990 + source /etc/os-release
00:01:35.990 ++ NAME='Fedora Linux'
00:01:35.990 ++ VERSION='39 (Cloud Edition)'
00:01:35.990 ++ ID=fedora
00:01:35.990 ++ VERSION_ID=39
00:01:35.990 ++ VERSION_CODENAME=
00:01:35.990 ++ PLATFORM_ID=platform:f39
00:01:35.990 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:35.990 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:35.990 ++ LOGO=fedora-logo-icon
00:01:35.990 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:35.990 ++ HOME_URL=https://fedoraproject.org/
00:01:35.990 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:35.990 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:35.990 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:35.990 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:35.990 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:35.990 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:35.990 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:35.990 ++ SUPPORT_END=2024-11-12
00:01:35.990 ++ VARIANT='Cloud Edition'
00:01:35.990 ++ VARIANT_ID=cloud
00:01:35.990 + uname -a
00:01:35.990 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:35.990 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:36.925 Hugepages
00:01:36.925 node hugesize free / total
00:01:36.925 node0 1048576kB 0 / 0
00:01:36.925 node0 2048kB 0 / 0
00:01:36.925 node1 1048576kB 0 / 0
00:01:36.925 node1 2048kB 0 / 0
00:01:36.925
00:01:36.925 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:36.925 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:36.925 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:36.925 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:36.925 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:36.925 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:36.925 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:36.925 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:36.925 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:36.925 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:36.925 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:36.925 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:37.184 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:37.184 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:37.184 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:37.184 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:37.184 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:37.184 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:37.184 + rm -f /tmp/spdk-ld-path
00:01:37.184 + source autorun-spdk.conf
00:01:37.184 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:37.184 ++ SPDK_TEST_NVMF=1
00:01:37.184 ++ SPDK_TEST_NVME_CLI=1
00:01:37.184 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:37.184 ++ SPDK_TEST_NVMF_NICS=e810
00:01:37.184 ++ SPDK_RUN_ASAN=1
00:01:37.184 ++ SPDK_RUN_UBSAN=1
00:01:37.184 ++ NET_TYPE=phy
00:01:37.184 ++ RUN_NIGHTLY=1
00:01:37.184 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:37.184 + [[ -n '' ]]
00:01:37.184 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:37.184 + for M in /var/spdk/build-*-manifest.txt
00:01:37.184 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:37.184 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:37.184 + for M in /var/spdk/build-*-manifest.txt
00:01:37.184 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:37.184 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:37.184 + for M in /var/spdk/build-*-manifest.txt
00:01:37.184 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:37.184 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:37.184 ++ uname
00:01:37.184 + [[ Linux == \L\i\n\u\x ]]
00:01:37.184 + sudo dmesg -T
00:01:37.184 + sudo dmesg --clear
00:01:37.184 + dmesg_pid=2739843
00:01:37.184 + [[ Fedora Linux == FreeBSD ]]
00:01:37.184 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:37.184 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:37.184 + sudo dmesg -Tw
00:01:37.184 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:37.184 + [[ -x /usr/src/fio-static/fio ]]
00:01:37.185 + export FIO_BIN=/usr/src/fio-static/fio
00:01:37.185 + FIO_BIN=/usr/src/fio-static/fio
00:01:37.185 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:37.185 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:37.185 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:37.185 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:37.185 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:37.185 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:37.185 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:37.185 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:37.185 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:37.185 18:08:35 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:37.185 18:08:35 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:37.185 18:08:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:37.185 18:08:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:37.185 18:08:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:37.185 18:08:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:37.185 18:08:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:37.185 18:08:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_ASAN=1
00:01:37.185 18:08:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:37.185 18:08:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:37.185 18:08:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1
00:01:37.185 18:08:35 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:37.185 18:08:35 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:37.185 18:08:35 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:37.185 18:08:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:37.185 18:08:35 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:37.185 18:08:35 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:37.185 18:08:35 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:37.185 18:08:35 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:37.185 18:08:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:37.185 18:08:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:37.185 18:08:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:37.185 18:08:35 -- paths/export.sh@5 -- $ export PATH
00:01:37.185 18:08:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:37.185 18:08:35 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:37.185 18:08:35 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:37.185 18:08:35 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731949715.XXXXXX
00:01:37.445 18:08:35 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731949715.Irof6w
00:01:37.445 18:08:35 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:37.445 18:08:35 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:37.445 18:08:35 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:37.445 18:08:35 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:37.445 18:08:35 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:37.445 18:08:35 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:37.445 18:08:35 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:37.445 18:08:35 -- common/autotest_common.sh@10 -- $ set +x
00:01:37.445 18:08:35 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:01:37.445 18:08:35 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:37.445 18:08:35 -- pm/common@17 -- $ local monitor
00:01:37.445 18:08:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:37.445 18:08:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:37.445 18:08:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:37.445 18:08:35 -- pm/common@21 -- $ date +%s
00:01:37.445 18:08:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:37.445 18:08:35 -- pm/common@21 -- $ date +%s
00:01:37.445 18:08:35 -- pm/common@25 -- $ sleep 1
00:01:37.445 18:08:35 -- pm/common@21 -- $ date +%s
00:01:37.445 18:08:35 -- pm/common@21 -- $ date +%s
00:01:37.445 18:08:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731949715
00:01:37.445 18:08:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731949715
00:01:37.445 18:08:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731949715
00:01:37.445 18:08:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731949715
00:01:37.445 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731949715_collect-vmstat.pm.log
00:01:37.445 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731949715_collect-cpu-load.pm.log
00:01:37.445 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731949715_collect-cpu-temp.pm.log
00:01:37.445 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731949715_collect-bmc-pm.bmc.pm.log
00:01:38.379 18:08:36 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:38.379 18:08:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:38.379 18:08:36 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:38.379 18:08:36 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:38.379 18:08:36 -- spdk/autobuild.sh@16 -- $ date -u
00:01:38.379 Mon Nov 18 05:08:36 PM UTC 2024
00:01:38.379 18:08:36 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:38.379 v25.01-pre-190-gd47eb51c9
00:01:38.379 18:08:36 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:38.379 18:08:36 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:38.379 18:08:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:38.379 18:08:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:38.379 18:08:36 -- common/autotest_common.sh@10 -- $ set +x
00:01:38.379 ************************************
00:01:38.379 START TEST asan
00:01:38.379 ************************************
00:01:38.379 18:08:36 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:38.379 using asan
00:01:38.379
00:01:38.379 real 0m0.000s
00:01:38.379 user 0m0.000s
00:01:38.379 sys 0m0.000s
00:01:38.379 18:08:36 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:38.379 18:08:36 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:38.379 ************************************
00:01:38.379 END TEST asan
00:01:38.379 ************************************
00:01:38.379 18:08:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:38.379 18:08:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:38.379 18:08:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:38.379 18:08:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:38.379 18:08:36 -- common/autotest_common.sh@10 -- $ set +x
00:01:38.379 ************************************
00:01:38.379 START TEST ubsan
00:01:38.379 ************************************
00:01:38.379 18:08:36 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:38.379 using ubsan
00:01:38.379
00:01:38.379 real 0m0.000s
00:01:38.379 user 0m0.000s
00:01:38.379 sys 0m0.000s
00:01:38.379 18:08:36 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:38.379 18:08:36 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:38.379 ************************************
00:01:38.379 END TEST ubsan
00:01:38.379 ************************************
00:01:38.379 18:08:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:38.379 18:08:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:38.379 18:08:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:38.379 18:08:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:38.379 18:08:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:38.379 18:08:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:38.379 18:08:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:38.379 18:08:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:38.379 18:08:36 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:01:38.379 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:38.379 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:38.944 Using 'verbs' RDMA provider
00:01:49.484 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:59.516 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:59.516 Creating mk/config.mk...done.
00:01:59.516 Creating mk/cc.flags.mk...done.
00:01:59.516 Type 'make' to build.
00:01:59.516 18:08:57 -- spdk/autobuild.sh@70 -- $ run_test make make -j48
00:01:59.516 18:08:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:59.516 18:08:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:59.516 18:08:57 -- common/autotest_common.sh@10 -- $ set +x
00:01:59.516 ************************************
00:01:59.516 START TEST make
00:01:59.516 ************************************
00:01:59.516 18:08:57 make -- common/autotest_common.sh@1129 -- $ make -j48
00:01:59.516 make[1]: Nothing to be done for 'all'.
00:02:09.518 The Meson build system
00:02:09.518 Version: 1.5.0
00:02:09.518 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:09.518 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:09.518 Build type: native build
00:02:09.518 Program cat found: YES (/usr/bin/cat)
00:02:09.518 Project name: DPDK
00:02:09.518 Project version: 24.03.0
00:02:09.518 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:09.518 C linker for the host machine: cc ld.bfd 2.40-14
00:02:09.518 Host machine cpu family: x86_64
00:02:09.518 Host machine cpu: x86_64
00:02:09.518 Message: ## Building in Developer Mode ##
00:02:09.518 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:09.518 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:09.518 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:09.518 Program python3 found: YES (/usr/bin/python3)
00:02:09.518 Program cat found: YES (/usr/bin/cat)
00:02:09.518 Compiler for C supports arguments -march=native: YES
00:02:09.518 Checking for size of "void *" : 8
00:02:09.518 Checking for size of "void *" : 8 (cached)
00:02:09.518 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:09.518 Library m found: YES
00:02:09.518 Library numa found: YES
00:02:09.518 Has header "numaif.h" : YES
00:02:09.518 Library fdt found: NO
00:02:09.518 Library execinfo found: NO
00:02:09.518 Has header "execinfo.h" : YES
00:02:09.518 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:09.518 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:09.518 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:09.518 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:09.518 Run-time dependency openssl found: YES 3.1.1
00:02:09.518 Run-time dependency libpcap found: YES 1.10.4
00:02:09.518 Has header "pcap.h" with dependency libpcap: YES
00:02:09.518 Compiler for C supports arguments -Wcast-qual: YES
00:02:09.518 Compiler for C supports arguments -Wdeprecated: YES
00:02:09.518 Compiler for C supports arguments -Wformat: YES
00:02:09.518 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:09.518 Compiler for C supports arguments -Wformat-security: NO
00:02:09.518 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:09.518 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:09.518 Compiler for C supports arguments -Wnested-externs: YES
00:02:09.518 Compiler for C supports arguments -Wold-style-definition: YES
00:02:09.518 Compiler for C supports arguments -Wpointer-arith: YES
00:02:09.518 Compiler for C supports arguments -Wsign-compare: YES
00:02:09.518 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:09.518 Compiler for C supports arguments -Wundef: YES
00:02:09.518 Compiler for C supports arguments -Wwrite-strings: YES
00:02:09.518 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:09.518 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:09.518 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:09.518 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:09.518 Program objdump found: YES (/usr/bin/objdump)
00:02:09.518 Compiler for C supports arguments -mavx512f: YES
00:02:09.518 Checking if "AVX512 checking" compiles: YES
00:02:09.518 Fetching value of define "__SSE4_2__" : 1
00:02:09.518 Fetching value of define "__AES__" : 1
00:02:09.518 Fetching value of define "__AVX__" : 1
00:02:09.518 Fetching value of define "__AVX2__" : (undefined)
00:02:09.518 Fetching value of define "__AVX512BW__" : (undefined)
00:02:09.518 Fetching value of define "__AVX512CD__" : (undefined)
00:02:09.518 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:09.518 Fetching value of define "__AVX512F__" : (undefined)
00:02:09.518 Fetching value of define "__AVX512VL__" : (undefined)
00:02:09.518 Fetching value of define "__PCLMUL__" : 1
00:02:09.518 Fetching value of define "__RDRND__" : 1
00:02:09.518 Fetching value of define "__RDSEED__" : (undefined)
00:02:09.518 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:09.518 Fetching value of define "__znver1__" : (undefined)
00:02:09.518 Fetching value of define "__znver2__" : (undefined)
00:02:09.518 Fetching value of define "__znver3__" : (undefined)
00:02:09.519 Fetching value of define "__znver4__" : (undefined)
00:02:09.519 Library asan found: YES
00:02:09.519 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:09.519 Message: lib/log: Defining dependency "log"
00:02:09.519 Message: lib/kvargs: Defining dependency "kvargs"
00:02:09.519 Message: lib/telemetry: Defining dependency "telemetry"
00:02:09.519 Library rt found: YES
00:02:09.519 Checking for function "getentropy" : NO
00:02:09.519 Message: lib/eal: Defining dependency "eal"
00:02:09.519 Message: lib/ring: Defining dependency "ring"
00:02:09.519 Message: lib/rcu: Defining dependency "rcu"
00:02:09.519 Message: lib/mempool: Defining dependency "mempool"
00:02:09.519 Message: lib/mbuf: Defining dependency "mbuf"
00:02:09.519 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:09.519 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:09.519 Compiler for C supports arguments -mpclmul: YES
00:02:09.519 Compiler for C supports arguments -maes: YES
00:02:09.519 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:09.519 Compiler for C supports arguments -mavx512bw: YES
00:02:09.519 Compiler for C supports arguments -mavx512dq: YES
00:02:09.519 Compiler for C supports arguments -mavx512vl: YES
00:02:09.519 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:09.519 Compiler for C supports arguments -mavx2: YES
00:02:09.519 Compiler for C supports arguments -mavx: YES
00:02:09.519 Message: lib/net: Defining dependency "net"
00:02:09.519 Message: lib/meter: Defining dependency "meter"
00:02:09.519 Message: lib/ethdev: Defining dependency "ethdev"
00:02:09.519 Message: lib/pci: Defining dependency "pci"
00:02:09.519 Message: lib/cmdline: Defining dependency "cmdline"
00:02:09.519 Message: lib/hash: Defining dependency "hash"
00:02:09.519 Message: lib/timer: Defining dependency "timer"
00:02:09.519 Message: lib/compressdev: Defining dependency "compressdev"
00:02:09.519 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:09.519 Message: lib/dmadev: Defining dependency "dmadev"
00:02:09.519 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:09.519 Message: lib/power: Defining dependency "power"
00:02:09.519 Message: lib/reorder: Defining dependency "reorder"
00:02:09.519 Message: lib/security: Defining dependency "security"
00:02:09.519 Has header "linux/userfaultfd.h" : YES
00:02:09.519 Has header "linux/vduse.h" : YES
00:02:09.519 Message: lib/vhost: Defining dependency "vhost"
00:02:09.519 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:09.519 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:09.519 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:09.519 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:09.519 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:09.519 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:09.519 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:09.519 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:09.519 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:09.519 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:09.519 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:09.519 Configuring doxy-api-html.conf using configuration 00:02:09.519 Configuring doxy-api-man.conf using configuration 00:02:09.519 Program mandb found: YES (/usr/bin/mandb) 00:02:09.519 Program sphinx-build found: NO 00:02:09.519 Configuring rte_build_config.h using configuration 00:02:09.519 Message: 00:02:09.519 ================= 00:02:09.519 Applications Enabled 00:02:09.519 ================= 00:02:09.519 00:02:09.519 apps: 00:02:09.519 00:02:09.519 00:02:09.519 Message: 00:02:09.519 ================= 00:02:09.519 Libraries Enabled 00:02:09.519 ================= 00:02:09.519 00:02:09.519 libs: 00:02:09.519 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:09.519 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:09.519 cryptodev, dmadev, power, reorder, security, vhost, 00:02:09.519 00:02:09.519 Message: 00:02:09.519 =============== 00:02:09.519 Drivers Enabled 00:02:09.519 =============== 00:02:09.519 00:02:09.519 common: 00:02:09.519 00:02:09.519 bus: 00:02:09.519 pci, vdev, 00:02:09.519 mempool: 00:02:09.519 ring, 00:02:09.519 dma: 00:02:09.519 00:02:09.519 net: 00:02:09.519 00:02:09.519 crypto: 00:02:09.519 00:02:09.519 compress: 00:02:09.519 00:02:09.519 vdpa: 00:02:09.519 00:02:09.519 00:02:09.519 Message: 00:02:09.519 ================= 00:02:09.519 Content Skipped 00:02:09.519 ================= 00:02:09.519 00:02:09.519 apps: 00:02:09.519 dumpcap: explicitly disabled via build config 00:02:09.519 graph: explicitly disabled via build 
config 00:02:09.519 pdump: explicitly disabled via build config 00:02:09.519 proc-info: explicitly disabled via build config 00:02:09.519 test-acl: explicitly disabled via build config 00:02:09.519 test-bbdev: explicitly disabled via build config 00:02:09.519 test-cmdline: explicitly disabled via build config 00:02:09.519 test-compress-perf: explicitly disabled via build config 00:02:09.519 test-crypto-perf: explicitly disabled via build config 00:02:09.519 test-dma-perf: explicitly disabled via build config 00:02:09.519 test-eventdev: explicitly disabled via build config 00:02:09.519 test-fib: explicitly disabled via build config 00:02:09.519 test-flow-perf: explicitly disabled via build config 00:02:09.519 test-gpudev: explicitly disabled via build config 00:02:09.519 test-mldev: explicitly disabled via build config 00:02:09.519 test-pipeline: explicitly disabled via build config 00:02:09.519 test-pmd: explicitly disabled via build config 00:02:09.519 test-regex: explicitly disabled via build config 00:02:09.519 test-sad: explicitly disabled via build config 00:02:09.519 test-security-perf: explicitly disabled via build config 00:02:09.519 00:02:09.519 libs: 00:02:09.519 argparse: explicitly disabled via build config 00:02:09.519 metrics: explicitly disabled via build config 00:02:09.519 acl: explicitly disabled via build config 00:02:09.519 bbdev: explicitly disabled via build config 00:02:09.519 bitratestats: explicitly disabled via build config 00:02:09.519 bpf: explicitly disabled via build config 00:02:09.519 cfgfile: explicitly disabled via build config 00:02:09.519 distributor: explicitly disabled via build config 00:02:09.519 efd: explicitly disabled via build config 00:02:09.519 eventdev: explicitly disabled via build config 00:02:09.519 dispatcher: explicitly disabled via build config 00:02:09.519 gpudev: explicitly disabled via build config 00:02:09.519 gro: explicitly disabled via build config 00:02:09.519 gso: explicitly disabled via build config 
00:02:09.519 ip_frag: explicitly disabled via build config 00:02:09.519 jobstats: explicitly disabled via build config 00:02:09.519 latencystats: explicitly disabled via build config 00:02:09.519 lpm: explicitly disabled via build config 00:02:09.519 member: explicitly disabled via build config 00:02:09.519 pcapng: explicitly disabled via build config 00:02:09.519 rawdev: explicitly disabled via build config 00:02:09.519 regexdev: explicitly disabled via build config 00:02:09.519 mldev: explicitly disabled via build config 00:02:09.519 rib: explicitly disabled via build config 00:02:09.519 sched: explicitly disabled via build config 00:02:09.519 stack: explicitly disabled via build config 00:02:09.519 ipsec: explicitly disabled via build config 00:02:09.519 pdcp: explicitly disabled via build config 00:02:09.519 fib: explicitly disabled via build config 00:02:09.519 port: explicitly disabled via build config 00:02:09.519 pdump: explicitly disabled via build config 00:02:09.519 table: explicitly disabled via build config 00:02:09.519 pipeline: explicitly disabled via build config 00:02:09.519 graph: explicitly disabled via build config 00:02:09.519 node: explicitly disabled via build config 00:02:09.519 00:02:09.519 drivers: 00:02:09.519 common/cpt: not in enabled drivers build config 00:02:09.519 common/dpaax: not in enabled drivers build config 00:02:09.519 common/iavf: not in enabled drivers build config 00:02:09.519 common/idpf: not in enabled drivers build config 00:02:09.519 common/ionic: not in enabled drivers build config 00:02:09.519 common/mvep: not in enabled drivers build config 00:02:09.519 common/octeontx: not in enabled drivers build config 00:02:09.519 bus/auxiliary: not in enabled drivers build config 00:02:09.519 bus/cdx: not in enabled drivers build config 00:02:09.519 bus/dpaa: not in enabled drivers build config 00:02:09.519 bus/fslmc: not in enabled drivers build config 00:02:09.519 bus/ifpga: not in enabled drivers build config 00:02:09.519 
bus/platform: not in enabled drivers build config 00:02:09.519 bus/uacce: not in enabled drivers build config 00:02:09.519 bus/vmbus: not in enabled drivers build config 00:02:09.519 common/cnxk: not in enabled drivers build config 00:02:09.519 common/mlx5: not in enabled drivers build config 00:02:09.519 common/nfp: not in enabled drivers build config 00:02:09.519 common/nitrox: not in enabled drivers build config 00:02:09.519 common/qat: not in enabled drivers build config 00:02:09.519 common/sfc_efx: not in enabled drivers build config 00:02:09.519 mempool/bucket: not in enabled drivers build config 00:02:09.519 mempool/cnxk: not in enabled drivers build config 00:02:09.519 mempool/dpaa: not in enabled drivers build config 00:02:09.519 mempool/dpaa2: not in enabled drivers build config 00:02:09.519 mempool/octeontx: not in enabled drivers build config 00:02:09.519 mempool/stack: not in enabled drivers build config 00:02:09.519 dma/cnxk: not in enabled drivers build config 00:02:09.519 dma/dpaa: not in enabled drivers build config 00:02:09.519 dma/dpaa2: not in enabled drivers build config 00:02:09.519 dma/hisilicon: not in enabled drivers build config 00:02:09.519 dma/idxd: not in enabled drivers build config 00:02:09.519 dma/ioat: not in enabled drivers build config 00:02:09.519 dma/skeleton: not in enabled drivers build config 00:02:09.519 net/af_packet: not in enabled drivers build config 00:02:09.519 net/af_xdp: not in enabled drivers build config 00:02:09.519 net/ark: not in enabled drivers build config 00:02:09.519 net/atlantic: not in enabled drivers build config 00:02:09.519 net/avp: not in enabled drivers build config 00:02:09.519 net/axgbe: not in enabled drivers build config 00:02:09.519 net/bnx2x: not in enabled drivers build config 00:02:09.519 net/bnxt: not in enabled drivers build config 00:02:09.519 net/bonding: not in enabled drivers build config 00:02:09.519 net/cnxk: not in enabled drivers build config 00:02:09.519 net/cpfl: not in enabled 
drivers build config 00:02:09.519 net/cxgbe: not in enabled drivers build config 00:02:09.519 net/dpaa: not in enabled drivers build config 00:02:09.519 net/dpaa2: not in enabled drivers build config 00:02:09.519 net/e1000: not in enabled drivers build config 00:02:09.519 net/ena: not in enabled drivers build config 00:02:09.519 net/enetc: not in enabled drivers build config 00:02:09.519 net/enetfec: not in enabled drivers build config 00:02:09.519 net/enic: not in enabled drivers build config 00:02:09.519 net/failsafe: not in enabled drivers build config 00:02:09.519 net/fm10k: not in enabled drivers build config 00:02:09.519 net/gve: not in enabled drivers build config 00:02:09.519 net/hinic: not in enabled drivers build config 00:02:09.519 net/hns3: not in enabled drivers build config 00:02:09.519 net/i40e: not in enabled drivers build config 00:02:09.519 net/iavf: not in enabled drivers build config 00:02:09.519 net/ice: not in enabled drivers build config 00:02:09.519 net/idpf: not in enabled drivers build config 00:02:09.519 net/igc: not in enabled drivers build config 00:02:09.519 net/ionic: not in enabled drivers build config 00:02:09.519 net/ipn3ke: not in enabled drivers build config 00:02:09.519 net/ixgbe: not in enabled drivers build config 00:02:09.519 net/mana: not in enabled drivers build config 00:02:09.519 net/memif: not in enabled drivers build config 00:02:09.519 net/mlx4: not in enabled drivers build config 00:02:09.519 net/mlx5: not in enabled drivers build config 00:02:09.519 net/mvneta: not in enabled drivers build config 00:02:09.519 net/mvpp2: not in enabled drivers build config 00:02:09.519 net/netvsc: not in enabled drivers build config 00:02:09.519 net/nfb: not in enabled drivers build config 00:02:09.519 net/nfp: not in enabled drivers build config 00:02:09.519 net/ngbe: not in enabled drivers build config 00:02:09.519 net/null: not in enabled drivers build config 00:02:09.519 net/octeontx: not in enabled drivers build config 
00:02:09.519 net/octeon_ep: not in enabled drivers build config 00:02:09.519 net/pcap: not in enabled drivers build config 00:02:09.519 net/pfe: not in enabled drivers build config 00:02:09.519 net/qede: not in enabled drivers build config 00:02:09.519 net/ring: not in enabled drivers build config 00:02:09.519 net/sfc: not in enabled drivers build config 00:02:09.519 net/softnic: not in enabled drivers build config 00:02:09.519 net/tap: not in enabled drivers build config 00:02:09.519 net/thunderx: not in enabled drivers build config 00:02:09.519 net/txgbe: not in enabled drivers build config 00:02:09.519 net/vdev_netvsc: not in enabled drivers build config 00:02:09.519 net/vhost: not in enabled drivers build config 00:02:09.519 net/virtio: not in enabled drivers build config 00:02:09.519 net/vmxnet3: not in enabled drivers build config 00:02:09.519 raw/*: missing internal dependency, "rawdev" 00:02:09.519 crypto/armv8: not in enabled drivers build config 00:02:09.519 crypto/bcmfs: not in enabled drivers build config 00:02:09.519 crypto/caam_jr: not in enabled drivers build config 00:02:09.519 crypto/ccp: not in enabled drivers build config 00:02:09.519 crypto/cnxk: not in enabled drivers build config 00:02:09.519 crypto/dpaa_sec: not in enabled drivers build config 00:02:09.519 crypto/dpaa2_sec: not in enabled drivers build config 00:02:09.519 crypto/ipsec_mb: not in enabled drivers build config 00:02:09.519 crypto/mlx5: not in enabled drivers build config 00:02:09.519 crypto/mvsam: not in enabled drivers build config 00:02:09.519 crypto/nitrox: not in enabled drivers build config 00:02:09.519 crypto/null: not in enabled drivers build config 00:02:09.519 crypto/octeontx: not in enabled drivers build config 00:02:09.519 crypto/openssl: not in enabled drivers build config 00:02:09.519 crypto/scheduler: not in enabled drivers build config 00:02:09.519 crypto/uadk: not in enabled drivers build config 00:02:09.519 crypto/virtio: not in enabled drivers build config 
00:02:09.519 compress/isal: not in enabled drivers build config 00:02:09.519 compress/mlx5: not in enabled drivers build config 00:02:09.519 compress/nitrox: not in enabled drivers build config 00:02:09.519 compress/octeontx: not in enabled drivers build config 00:02:09.519 compress/zlib: not in enabled drivers build config 00:02:09.519 regex/*: missing internal dependency, "regexdev" 00:02:09.519 ml/*: missing internal dependency, "mldev" 00:02:09.519 vdpa/ifc: not in enabled drivers build config 00:02:09.519 vdpa/mlx5: not in enabled drivers build config 00:02:09.519 vdpa/nfp: not in enabled drivers build config 00:02:09.519 vdpa/sfc: not in enabled drivers build config 00:02:09.519 event/*: missing internal dependency, "eventdev" 00:02:09.519 baseband/*: missing internal dependency, "bbdev" 00:02:09.519 gpu/*: missing internal dependency, "gpudev" 00:02:09.519 00:02:09.519 00:02:09.519 Build targets in project: 85 00:02:09.519 00:02:09.519 DPDK 24.03.0 00:02:09.519 00:02:09.519 User defined options 00:02:09.519 buildtype : debug 00:02:09.519 default_library : shared 00:02:09.519 libdir : lib 00:02:09.519 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:09.519 b_sanitize : address 00:02:09.519 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:09.519 c_link_args : 00:02:09.519 cpu_instruction_set: native 00:02:09.519 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:09.519 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:09.519 enable_docs : false 00:02:09.519 
enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:09.519 enable_kmods : false 00:02:09.519 max_lcores : 128 00:02:09.519 tests : false 00:02:09.519 00:02:09.519 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:09.519 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:09.519 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:09.519 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:09.519 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:09.519 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:09.519 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:09.519 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:09.519 [7/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:09.519 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:09.519 [9/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:09.519 [10/268] Linking static target lib/librte_kvargs.a 00:02:09.519 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:09.519 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:09.519 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:09.519 [14/268] Linking static target lib/librte_log.a 00:02:09.519 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:09.519 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:10.091 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.091 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:10.091 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 
00:02:10.091 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:10.091 [21/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:10.091 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:10.091 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:10.091 [24/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:10.091 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:10.091 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:10.091 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:10.091 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:10.091 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:10.091 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:10.091 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:10.091 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:10.091 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:10.354 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:10.354 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:10.354 [36/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:10.354 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:10.354 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:10.354 [39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:10.354 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:10.354 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 
00:02:10.354 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:10.354 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:10.354 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:10.354 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:10.354 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:10.354 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:10.354 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:10.354 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:10.354 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:10.354 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:10.354 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:10.354 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:10.354 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:10.354 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:10.354 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:10.354 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:10.354 [58/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:10.615 [59/268] Linking static target lib/librte_telemetry.a 00:02:10.615 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:10.615 [61/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.615 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:10.615 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:10.615 [64/268] Linking 
target lib/librte_log.so.24.1 00:02:10.615 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:10.615 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:10.875 [67/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:10.875 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:11.135 [69/268] Linking target lib/librte_kvargs.so.24.1 00:02:11.135 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:11.135 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:11.135 [72/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:11.135 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:11.135 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:11.135 [75/268] Linking static target lib/librte_pci.a 00:02:11.135 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:11.135 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:11.135 [78/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:11.135 [79/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:11.135 [80/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:11.135 [81/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:11.399 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:11.399 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:11.399 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:11.399 [85/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:11.399 [86/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:11.399 [87/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:11.399 [88/268] 
Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:11.399 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:11.399 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:11.399 [91/268] Linking static target lib/librte_meter.a 00:02:11.399 [92/268] Linking static target lib/librte_ring.a 00:02:11.399 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:11.399 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:11.399 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:11.399 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:11.399 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:11.399 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:11.399 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:11.399 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:11.399 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:11.399 [102/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:11.399 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:11.399 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:11.399 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:11.399 [106/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.659 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:11.659 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:11.659 [109/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:11.659 [110/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:11.659 [111/268] Linking target lib/librte_telemetry.so.24.1 00:02:11.659 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:11.659 [113/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:11.659 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:11.659 [115/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.659 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:11.659 [117/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:11.659 [118/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:11.659 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:11.659 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:11.930 [121/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:11.931 [122/268] Linking static target lib/librte_rcu.a 00:02:11.931 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:11.931 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:11.931 [125/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.931 [126/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:11.931 [127/268] Linking static target lib/librte_mempool.a 00:02:11.931 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:11.931 [129/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:11.931 [130/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.931 [131/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:11.931 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 
00:02:11.931 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:12.190 [134/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:12.190 [135/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:12.190 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:12.190 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:12.190 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:12.190 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:12.190 [140/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:12.190 [141/268] Linking static target lib/librte_cmdline.a 00:02:12.449 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:12.449 [143/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:12.449 [144/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:12.449 [145/268] Linking static target lib/librte_eal.a 00:02:12.449 [146/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:12.450 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:12.450 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:12.450 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:12.450 [150/268] Linking static target lib/librte_timer.a 00:02:12.450 [151/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:12.450 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:12.450 [153/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.450 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:12.709 [155/268] Compiling C 
object lib/librte_power.a.p/power_rte_power.c.o 00:02:12.709 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:12.709 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:12.709 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:12.709 [159/268] Linking static target lib/librte_dmadev.a 00:02:12.967 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:12.967 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:12.967 [162/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.967 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.967 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:12.967 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:12.967 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:13.225 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:13.225 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:13.225 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:13.225 [170/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:13.225 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:13.225 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:13.225 [173/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.225 [174/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:13.225 [175/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:13.225 [176/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:13.225 [177/268] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:13.225 [178/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:13.225 [179/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.225 [180/268] Linking static target lib/librte_net.a 00:02:13.225 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:13.225 [182/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:13.483 [183/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:13.483 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:13.483 [185/268] Linking static target lib/librte_power.a 00:02:13.483 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:13.483 [187/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:13.483 [188/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:13.483 [189/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:13.483 [190/268] Linking static target drivers/librte_bus_vdev.a 00:02:13.483 [191/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:13.483 [192/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:13.483 [193/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.741 [194/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:13.741 [195/268] Linking static target lib/librte_hash.a 00:02:13.741 [196/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:13.741 [197/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:13.741 [198/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:13.741 [199/268] Linking static 
target drivers/librte_bus_pci.a 00:02:13.741 [200/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.741 [201/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:13.741 [202/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:13.741 [203/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:13.741 [204/268] Linking static target drivers/librte_mempool_ring.a 00:02:13.741 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:13.741 [206/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:13.741 [207/268] Linking static target lib/librte_compressdev.a 00:02:13.741 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:14.000 [209/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.000 [210/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:14.000 [211/268] Linking static target lib/librte_reorder.a 00:02:14.258 [212/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.258 [213/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.258 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.258 [215/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.824 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:14.824 [217/268] Linking static target lib/librte_security.a 00:02:15.083 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:15.083 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.648 [220/268] 
Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:15.648 [221/268] Linking static target lib/librte_mbuf.a 00:02:16.214 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:16.214 [223/268] Linking static target lib/librte_cryptodev.a 00:02:16.214 [224/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.147 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:17.147 [226/268] Linking static target lib/librte_ethdev.a 00:02:17.147 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.519 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.519 [229/268] Linking target lib/librte_eal.so.24.1 00:02:18.777 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:18.777 [231/268] Linking target lib/librte_pci.so.24.1 00:02:18.777 [232/268] Linking target lib/librte_ring.so.24.1 00:02:18.777 [233/268] Linking target lib/librte_timer.so.24.1 00:02:18.777 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:18.777 [235/268] Linking target lib/librte_meter.so.24.1 00:02:18.777 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:18.777 [237/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:18.777 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:18.777 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:18.777 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:18.777 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:19.035 [242/268] Linking target lib/librte_mempool.so.24.1 00:02:19.035 [243/268] Linking target lib/librte_rcu.so.24.1 00:02:19.035 [244/268] Linking target 
drivers/librte_bus_pci.so.24.1 00:02:19.035 [245/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:19.035 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:19.035 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:19.035 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:19.292 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:19.292 [250/268] Linking target lib/librte_reorder.so.24.1 00:02:19.292 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:19.292 [252/268] Linking target lib/librte_net.so.24.1 00:02:19.292 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:19.292 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:19.292 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:19.549 [256/268] Linking target lib/librte_cmdline.so.24.1 00:02:19.549 [257/268] Linking target lib/librte_security.so.24.1 00:02:19.549 [258/268] Linking target lib/librte_hash.so.24.1 00:02:19.549 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:20.115 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:21.498 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.498 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:21.498 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:21.498 [264/268] Linking target lib/librte_power.so.24.1 00:02:48.028 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:48.028 [266/268] Linking static target lib/librte_vhost.a 00:02:48.028 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.028 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:48.028 
INFO: autodetecting backend as ninja 00:02:48.028 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:48.028 CC lib/ut_mock/mock.o 00:02:48.028 CC lib/log/log.o 00:02:48.028 CC lib/log/log_flags.o 00:02:48.028 CC lib/log/log_deprecated.o 00:02:48.028 CC lib/ut/ut.o 00:02:48.286 LIB libspdk_ut_mock.a 00:02:48.286 LIB libspdk_ut.a 00:02:48.286 LIB libspdk_log.a 00:02:48.286 SO libspdk_ut_mock.so.6.0 00:02:48.286 SO libspdk_ut.so.2.0 00:02:48.286 SO libspdk_log.so.7.1 00:02:48.286 SYMLINK libspdk_ut.so 00:02:48.286 SYMLINK libspdk_ut_mock.so 00:02:48.286 SYMLINK libspdk_log.so 00:02:48.544 CC lib/dma/dma.o 00:02:48.544 CXX lib/trace_parser/trace.o 00:02:48.544 CC lib/ioat/ioat.o 00:02:48.544 CC lib/util/base64.o 00:02:48.544 CC lib/util/bit_array.o 00:02:48.544 CC lib/util/cpuset.o 00:02:48.544 CC lib/util/crc16.o 00:02:48.544 CC lib/util/crc32.o 00:02:48.544 CC lib/util/crc32c.o 00:02:48.544 CC lib/util/crc32_ieee.o 00:02:48.544 CC lib/util/crc64.o 00:02:48.544 CC lib/util/dif.o 00:02:48.544 CC lib/util/fd.o 00:02:48.544 CC lib/util/fd_group.o 00:02:48.544 CC lib/util/file.o 00:02:48.544 CC lib/util/hexlify.o 00:02:48.544 CC lib/util/iov.o 00:02:48.544 CC lib/util/math.o 00:02:48.544 CC lib/util/net.o 00:02:48.544 CC lib/util/pipe.o 00:02:48.544 CC lib/util/strerror_tls.o 00:02:48.544 CC lib/util/string.o 00:02:48.544 CC lib/util/uuid.o 00:02:48.544 CC lib/util/zipf.o 00:02:48.544 CC lib/util/xor.o 00:02:48.544 CC lib/util/md5.o 00:02:48.544 CC lib/vfio_user/host/vfio_user_pci.o 00:02:48.544 CC lib/vfio_user/host/vfio_user.o 00:02:48.802 LIB libspdk_dma.a 00:02:48.802 SO libspdk_dma.so.5.0 00:02:48.802 SYMLINK libspdk_dma.so 00:02:48.802 LIB libspdk_ioat.a 00:02:49.060 SO libspdk_ioat.so.7.0 00:02:49.060 LIB libspdk_vfio_user.a 00:02:49.060 SYMLINK libspdk_ioat.so 00:02:49.060 SO libspdk_vfio_user.so.5.0 00:02:49.060 SYMLINK libspdk_vfio_user.so 00:02:49.318 LIB libspdk_util.a 
00:02:49.318 SO libspdk_util.so.10.1 00:02:49.318 SYMLINK libspdk_util.so 00:02:49.576 CC lib/rdma_utils/rdma_utils.o 00:02:49.576 CC lib/conf/conf.o 00:02:49.576 CC lib/idxd/idxd.o 00:02:49.576 CC lib/env_dpdk/env.o 00:02:49.576 CC lib/vmd/vmd.o 00:02:49.576 CC lib/json/json_parse.o 00:02:49.576 CC lib/idxd/idxd_user.o 00:02:49.576 CC lib/env_dpdk/memory.o 00:02:49.576 CC lib/vmd/led.o 00:02:49.576 CC lib/json/json_util.o 00:02:49.576 CC lib/idxd/idxd_kernel.o 00:02:49.576 CC lib/env_dpdk/pci.o 00:02:49.576 CC lib/json/json_write.o 00:02:49.576 CC lib/env_dpdk/init.o 00:02:49.576 CC lib/env_dpdk/threads.o 00:02:49.576 CC lib/env_dpdk/pci_ioat.o 00:02:49.576 CC lib/env_dpdk/pci_virtio.o 00:02:49.576 CC lib/env_dpdk/pci_vmd.o 00:02:49.576 CC lib/env_dpdk/pci_idxd.o 00:02:49.576 CC lib/env_dpdk/pci_event.o 00:02:49.576 CC lib/env_dpdk/sigbus_handler.o 00:02:49.576 CC lib/env_dpdk/pci_dpdk.o 00:02:49.576 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:49.576 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:49.576 LIB libspdk_trace_parser.a 00:02:49.576 SO libspdk_trace_parser.so.6.0 00:02:49.833 SYMLINK libspdk_trace_parser.so 00:02:49.833 LIB libspdk_conf.a 00:02:49.833 SO libspdk_conf.so.6.0 00:02:49.833 LIB libspdk_rdma_utils.a 00:02:50.091 SYMLINK libspdk_conf.so 00:02:50.091 SO libspdk_rdma_utils.so.1.0 00:02:50.091 LIB libspdk_json.a 00:02:50.091 SO libspdk_json.so.6.0 00:02:50.091 SYMLINK libspdk_rdma_utils.so 00:02:50.091 SYMLINK libspdk_json.so 00:02:50.091 CC lib/rdma_provider/common.o 00:02:50.091 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:50.348 CC lib/jsonrpc/jsonrpc_server.o 00:02:50.348 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:50.348 CC lib/jsonrpc/jsonrpc_client.o 00:02:50.348 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:50.348 LIB libspdk_idxd.a 00:02:50.348 LIB libspdk_rdma_provider.a 00:02:50.605 SO libspdk_rdma_provider.so.7.0 00:02:50.605 SO libspdk_idxd.so.12.1 00:02:50.605 LIB libspdk_vmd.a 00:02:50.605 SYMLINK libspdk_rdma_provider.so 00:02:50.605 SO 
libspdk_vmd.so.6.0 00:02:50.605 SYMLINK libspdk_idxd.so 00:02:50.605 LIB libspdk_jsonrpc.a 00:02:50.605 SO libspdk_jsonrpc.so.6.0 00:02:50.605 SYMLINK libspdk_vmd.so 00:02:50.605 SYMLINK libspdk_jsonrpc.so 00:02:50.863 CC lib/rpc/rpc.o 00:02:51.121 LIB libspdk_rpc.a 00:02:51.121 SO libspdk_rpc.so.6.0 00:02:51.121 SYMLINK libspdk_rpc.so 00:02:51.378 CC lib/trace/trace.o 00:02:51.379 CC lib/keyring/keyring.o 00:02:51.379 CC lib/trace/trace_flags.o 00:02:51.379 CC lib/notify/notify.o 00:02:51.379 CC lib/keyring/keyring_rpc.o 00:02:51.379 CC lib/trace/trace_rpc.o 00:02:51.379 CC lib/notify/notify_rpc.o 00:02:51.379 LIB libspdk_notify.a 00:02:51.379 SO libspdk_notify.so.6.0 00:02:51.636 SYMLINK libspdk_notify.so 00:02:51.636 LIB libspdk_keyring.a 00:02:51.636 LIB libspdk_trace.a 00:02:51.636 SO libspdk_keyring.so.2.0 00:02:51.636 SO libspdk_trace.so.11.0 00:02:51.636 SYMLINK libspdk_keyring.so 00:02:51.636 SYMLINK libspdk_trace.so 00:02:51.893 CC lib/thread/thread.o 00:02:51.893 CC lib/sock/sock.o 00:02:51.893 CC lib/sock/sock_rpc.o 00:02:51.893 CC lib/thread/iobuf.o 00:02:52.459 LIB libspdk_sock.a 00:02:52.459 SO libspdk_sock.so.10.0 00:02:52.459 SYMLINK libspdk_sock.so 00:02:52.459 LIB libspdk_env_dpdk.a 00:02:52.716 SO libspdk_env_dpdk.so.15.1 00:02:52.716 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:52.716 CC lib/nvme/nvme_ctrlr.o 00:02:52.716 CC lib/nvme/nvme_fabric.o 00:02:52.716 CC lib/nvme/nvme_ns_cmd.o 00:02:52.716 CC lib/nvme/nvme_ns.o 00:02:52.716 CC lib/nvme/nvme_pcie_common.o 00:02:52.716 CC lib/nvme/nvme_pcie.o 00:02:52.716 CC lib/nvme/nvme_qpair.o 00:02:52.716 CC lib/nvme/nvme.o 00:02:52.716 CC lib/nvme/nvme_quirks.o 00:02:52.716 CC lib/nvme/nvme_transport.o 00:02:52.716 CC lib/nvme/nvme_discovery.o 00:02:52.716 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:52.716 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:52.716 CC lib/nvme/nvme_tcp.o 00:02:52.716 CC lib/nvme/nvme_opal.o 00:02:52.716 CC lib/nvme/nvme_io_msg.o 00:02:52.716 CC lib/nvme/nvme_poll_group.o 00:02:52.717 CC 
lib/nvme/nvme_zns.o 00:02:52.717 CC lib/nvme/nvme_stubs.o 00:02:52.717 CC lib/nvme/nvme_auth.o 00:02:52.717 CC lib/nvme/nvme_cuse.o 00:02:52.717 CC lib/nvme/nvme_rdma.o 00:02:52.974 SYMLINK libspdk_env_dpdk.so 00:02:53.908 LIB libspdk_thread.a 00:02:54.167 SO libspdk_thread.so.11.0 00:02:54.167 SYMLINK libspdk_thread.so 00:02:54.167 CC lib/init/json_config.o 00:02:54.167 CC lib/accel/accel.o 00:02:54.167 CC lib/blob/blobstore.o 00:02:54.167 CC lib/accel/accel_rpc.o 00:02:54.167 CC lib/init/subsystem.o 00:02:54.167 CC lib/blob/request.o 00:02:54.167 CC lib/accel/accel_sw.o 00:02:54.167 CC lib/init/subsystem_rpc.o 00:02:54.167 CC lib/init/rpc.o 00:02:54.167 CC lib/blob/zeroes.o 00:02:54.167 CC lib/blob/blob_bs_dev.o 00:02:54.167 CC lib/virtio/virtio.o 00:02:54.167 CC lib/fsdev/fsdev.o 00:02:54.167 CC lib/virtio/virtio_vhost_user.o 00:02:54.167 CC lib/fsdev/fsdev_io.o 00:02:54.167 CC lib/virtio/virtio_vfio_user.o 00:02:54.167 CC lib/fsdev/fsdev_rpc.o 00:02:54.167 CC lib/virtio/virtio_pci.o 00:02:54.733 LIB libspdk_init.a 00:02:54.733 SO libspdk_init.so.6.0 00:02:54.733 SYMLINK libspdk_init.so 00:02:54.733 LIB libspdk_virtio.a 00:02:54.733 SO libspdk_virtio.so.7.0 00:02:54.733 SYMLINK libspdk_virtio.so 00:02:54.733 CC lib/event/app.o 00:02:54.733 CC lib/event/reactor.o 00:02:54.733 CC lib/event/log_rpc.o 00:02:54.733 CC lib/event/app_rpc.o 00:02:54.733 CC lib/event/scheduler_static.o 00:02:55.298 LIB libspdk_fsdev.a 00:02:55.298 SO libspdk_fsdev.so.2.0 00:02:55.298 SYMLINK libspdk_fsdev.so 00:02:55.298 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:55.556 LIB libspdk_event.a 00:02:55.556 SO libspdk_event.so.14.0 00:02:55.556 SYMLINK libspdk_event.so 00:02:55.814 LIB libspdk_nvme.a 00:02:55.814 LIB libspdk_accel.a 00:02:55.814 SO libspdk_accel.so.16.0 00:02:55.814 SO libspdk_nvme.so.15.0 00:02:55.814 SYMLINK libspdk_accel.so 00:02:56.073 CC lib/bdev/bdev.o 00:02:56.073 CC lib/bdev/bdev_rpc.o 00:02:56.073 CC lib/bdev/bdev_zone.o 00:02:56.073 CC lib/bdev/part.o 
00:02:56.073 CC lib/bdev/scsi_nvme.o 00:02:56.073 SYMLINK libspdk_nvme.so 00:02:56.331 LIB libspdk_fuse_dispatcher.a 00:02:56.331 SO libspdk_fuse_dispatcher.so.1.0 00:02:56.331 SYMLINK libspdk_fuse_dispatcher.so 00:02:58.859 LIB libspdk_blob.a 00:02:58.859 SO libspdk_blob.so.11.0 00:02:58.859 SYMLINK libspdk_blob.so 00:02:59.117 CC lib/blobfs/blobfs.o 00:02:59.117 CC lib/blobfs/tree.o 00:02:59.117 CC lib/lvol/lvol.o 00:02:59.375 LIB libspdk_bdev.a 00:02:59.375 SO libspdk_bdev.so.17.0 00:02:59.644 SYMLINK libspdk_bdev.so 00:02:59.644 CC lib/nbd/nbd.o 00:02:59.644 CC lib/scsi/dev.o 00:02:59.644 CC lib/ublk/ublk.o 00:02:59.644 CC lib/nbd/nbd_rpc.o 00:02:59.644 CC lib/scsi/lun.o 00:02:59.644 CC lib/ublk/ublk_rpc.o 00:02:59.644 CC lib/nvmf/ctrlr.o 00:02:59.644 CC lib/ftl/ftl_core.o 00:02:59.644 CC lib/nvmf/ctrlr_discovery.o 00:02:59.644 CC lib/ftl/ftl_init.o 00:02:59.644 CC lib/scsi/port.o 00:02:59.644 CC lib/nvmf/ctrlr_bdev.o 00:02:59.644 CC lib/ftl/ftl_layout.o 00:02:59.644 CC lib/nvmf/subsystem.o 00:02:59.644 CC lib/scsi/scsi.o 00:02:59.644 CC lib/ftl/ftl_debug.o 00:02:59.644 CC lib/scsi/scsi_bdev.o 00:02:59.644 CC lib/nvmf/nvmf.o 00:02:59.644 CC lib/ftl/ftl_io.o 00:02:59.644 CC lib/nvmf/nvmf_rpc.o 00:02:59.644 CC lib/scsi/scsi_rpc.o 00:02:59.644 CC lib/scsi/scsi_pr.o 00:02:59.644 CC lib/ftl/ftl_sb.o 00:02:59.644 CC lib/nvmf/transport.o 00:02:59.644 CC lib/scsi/task.o 00:02:59.644 CC lib/ftl/ftl_l2p.o 00:02:59.644 CC lib/nvmf/tcp.o 00:02:59.644 CC lib/ftl/ftl_l2p_flat.o 00:02:59.644 CC lib/nvmf/stubs.o 00:02:59.644 CC lib/ftl/ftl_nv_cache.o 00:02:59.644 CC lib/nvmf/mdns_server.o 00:02:59.644 CC lib/ftl/ftl_band.o 00:02:59.644 CC lib/ftl/ftl_band_ops.o 00:02:59.644 CC lib/nvmf/rdma.o 00:02:59.644 CC lib/nvmf/auth.o 00:02:59.644 CC lib/ftl/ftl_writer.o 00:02:59.644 CC lib/ftl/ftl_rq.o 00:02:59.644 CC lib/ftl/ftl_reloc.o 00:02:59.644 CC lib/ftl/ftl_l2p_cache.o 00:02:59.644 CC lib/ftl/ftl_p2l.o 00:02:59.644 CC lib/ftl/ftl_p2l_log.o 00:02:59.644 CC lib/ftl/mngt/ftl_mngt.o 
00:02:59.644 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:59.644 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:59.644 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:59.644 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:00.211 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:00.211 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:00.211 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:00.211 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:00.211 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:00.211 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:00.211 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:00.211 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:00.211 CC lib/ftl/utils/ftl_conf.o 00:03:00.211 CC lib/ftl/utils/ftl_md.o 00:03:00.211 CC lib/ftl/utils/ftl_mempool.o 00:03:00.211 CC lib/ftl/utils/ftl_bitmap.o 00:03:00.211 CC lib/ftl/utils/ftl_property.o 00:03:00.211 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:00.211 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:00.211 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:00.211 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:00.211 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:00.482 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:00.482 LIB libspdk_blobfs.a 00:03:00.482 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:00.482 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:00.482 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:00.482 SO libspdk_blobfs.so.10.0 00:03:00.482 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:00.482 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:00.482 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:00.482 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:00.482 CC lib/ftl/base/ftl_base_dev.o 00:03:00.482 CC lib/ftl/base/ftl_base_bdev.o 00:03:00.482 CC lib/ftl/ftl_trace.o 00:03:00.482 SYMLINK libspdk_blobfs.so 00:03:00.776 LIB libspdk_nbd.a 00:03:00.776 SO libspdk_nbd.so.7.0 00:03:00.776 SYMLINK libspdk_nbd.so 00:03:00.776 LIB libspdk_lvol.a 00:03:00.776 SO libspdk_lvol.so.10.0 00:03:00.776 LIB libspdk_scsi.a 00:03:01.033 SO libspdk_scsi.so.9.0 00:03:01.033 SYMLINK libspdk_lvol.so 00:03:01.033 SYMLINK libspdk_scsi.so 00:03:01.033 LIB libspdk_ublk.a 
00:03:01.033 SO libspdk_ublk.so.3.0 00:03:01.033 CC lib/iscsi/conn.o 00:03:01.033 CC lib/vhost/vhost.o 00:03:01.033 CC lib/vhost/vhost_rpc.o 00:03:01.033 CC lib/iscsi/init_grp.o 00:03:01.033 CC lib/iscsi/iscsi.o 00:03:01.033 CC lib/vhost/vhost_scsi.o 00:03:01.033 CC lib/iscsi/param.o 00:03:01.033 CC lib/vhost/vhost_blk.o 00:03:01.033 CC lib/vhost/rte_vhost_user.o 00:03:01.033 CC lib/iscsi/portal_grp.o 00:03:01.033 CC lib/iscsi/tgt_node.o 00:03:01.033 CC lib/iscsi/iscsi_subsystem.o 00:03:01.033 CC lib/iscsi/iscsi_rpc.o 00:03:01.033 CC lib/iscsi/task.o 00:03:01.291 SYMLINK libspdk_ublk.so 00:03:01.549 LIB libspdk_ftl.a 00:03:01.807 SO libspdk_ftl.so.9.0 00:03:02.064 SYMLINK libspdk_ftl.so 00:03:02.630 LIB libspdk_vhost.a 00:03:02.630 SO libspdk_vhost.so.8.0 00:03:02.630 SYMLINK libspdk_vhost.so 00:03:03.194 LIB libspdk_iscsi.a 00:03:03.194 SO libspdk_iscsi.so.8.0 00:03:03.194 LIB libspdk_nvmf.a 00:03:03.194 SYMLINK libspdk_iscsi.so 00:03:03.452 SO libspdk_nvmf.so.20.0 00:03:03.453 SYMLINK libspdk_nvmf.so 00:03:03.711 CC module/env_dpdk/env_dpdk_rpc.o 00:03:03.969 CC module/keyring/linux/keyring.o 00:03:03.969 CC module/sock/posix/posix.o 00:03:03.969 CC module/keyring/file/keyring.o 00:03:03.969 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:03.969 CC module/keyring/linux/keyring_rpc.o 00:03:03.969 CC module/accel/iaa/accel_iaa.o 00:03:03.969 CC module/keyring/file/keyring_rpc.o 00:03:03.969 CC module/scheduler/gscheduler/gscheduler.o 00:03:03.969 CC module/accel/error/accel_error.o 00:03:03.969 CC module/accel/iaa/accel_iaa_rpc.o 00:03:03.969 CC module/accel/ioat/accel_ioat.o 00:03:03.969 CC module/accel/error/accel_error_rpc.o 00:03:03.969 CC module/accel/ioat/accel_ioat_rpc.o 00:03:03.969 CC module/accel/dsa/accel_dsa.o 00:03:03.969 CC module/fsdev/aio/fsdev_aio.o 00:03:03.969 CC module/blob/bdev/blob_bdev.o 00:03:03.969 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:03.969 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:03.969 CC 
module/accel/dsa/accel_dsa_rpc.o 00:03:03.969 CC module/fsdev/aio/linux_aio_mgr.o 00:03:03.969 LIB libspdk_env_dpdk_rpc.a 00:03:03.969 SO libspdk_env_dpdk_rpc.so.6.0 00:03:03.969 SYMLINK libspdk_env_dpdk_rpc.so 00:03:04.227 LIB libspdk_keyring_linux.a 00:03:04.227 LIB libspdk_keyring_file.a 00:03:04.227 LIB libspdk_scheduler_gscheduler.a 00:03:04.227 LIB libspdk_scheduler_dpdk_governor.a 00:03:04.227 SO libspdk_keyring_linux.so.1.0 00:03:04.227 SO libspdk_keyring_file.so.2.0 00:03:04.227 SO libspdk_scheduler_gscheduler.so.4.0 00:03:04.227 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:04.227 LIB libspdk_accel_ioat.a 00:03:04.227 LIB libspdk_scheduler_dynamic.a 00:03:04.227 SO libspdk_accel_ioat.so.6.0 00:03:04.227 SYMLINK libspdk_keyring_linux.so 00:03:04.227 LIB libspdk_accel_iaa.a 00:03:04.227 LIB libspdk_accel_error.a 00:03:04.227 SYMLINK libspdk_scheduler_gscheduler.so 00:03:04.227 SYMLINK libspdk_keyring_file.so 00:03:04.227 SO libspdk_scheduler_dynamic.so.4.0 00:03:04.227 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:04.227 SO libspdk_accel_iaa.so.3.0 00:03:04.227 SO libspdk_accel_error.so.2.0 00:03:04.227 SYMLINK libspdk_accel_ioat.so 00:03:04.228 SYMLINK libspdk_scheduler_dynamic.so 00:03:04.228 SYMLINK libspdk_accel_error.so 00:03:04.228 SYMLINK libspdk_accel_iaa.so 00:03:04.228 LIB libspdk_blob_bdev.a 00:03:04.228 LIB libspdk_accel_dsa.a 00:03:04.228 SO libspdk_blob_bdev.so.11.0 00:03:04.228 SO libspdk_accel_dsa.so.5.0 00:03:04.486 SYMLINK libspdk_blob_bdev.so 00:03:04.486 SYMLINK libspdk_accel_dsa.so 00:03:04.486 CC module/bdev/gpt/gpt.o 00:03:04.486 CC module/bdev/gpt/vbdev_gpt.o 00:03:04.486 CC module/bdev/error/vbdev_error.o 00:03:04.486 CC module/bdev/malloc/bdev_malloc.o 00:03:04.486 CC module/bdev/error/vbdev_error_rpc.o 00:03:04.486 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:04.486 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:04.486 CC module/blobfs/bdev/blobfs_bdev.o 00:03:04.486 CC module/bdev/null/bdev_null.o 00:03:04.486 CC 
module/bdev/passthru/vbdev_passthru.o 00:03:04.486 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:04.486 CC module/bdev/null/bdev_null_rpc.o 00:03:04.486 CC module/bdev/nvme/bdev_nvme.o 00:03:04.486 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:04.486 CC module/bdev/lvol/vbdev_lvol.o 00:03:04.486 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:04.486 CC module/bdev/delay/vbdev_delay.o 00:03:04.486 CC module/bdev/nvme/nvme_rpc.o 00:03:04.486 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:04.486 CC module/bdev/nvme/bdev_mdns_client.o 00:03:04.486 CC module/bdev/split/vbdev_split.o 00:03:04.486 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:04.486 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:04.486 CC module/bdev/nvme/vbdev_opal.o 00:03:04.486 CC module/bdev/split/vbdev_split_rpc.o 00:03:04.486 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:04.486 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:04.745 CC module/bdev/iscsi/bdev_iscsi.o 00:03:04.745 CC module/bdev/ftl/bdev_ftl.o 00:03:04.745 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:04.745 CC module/bdev/raid/bdev_raid.o 00:03:04.745 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:04.745 CC module/bdev/raid/bdev_raid_rpc.o 00:03:04.745 CC module/bdev/raid/bdev_raid_sb.o 00:03:04.745 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:04.745 CC module/bdev/raid/raid0.o 00:03:04.745 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:04.745 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:04.745 CC module/bdev/raid/concat.o 00:03:04.745 CC module/bdev/raid/raid1.o 00:03:04.745 CC module/bdev/aio/bdev_aio.o 00:03:04.745 CC module/bdev/aio/bdev_aio_rpc.o 00:03:05.003 LIB libspdk_blobfs_bdev.a 00:03:05.003 SO libspdk_blobfs_bdev.so.6.0 00:03:05.003 SYMLINK libspdk_blobfs_bdev.so 00:03:05.003 LIB libspdk_bdev_split.a 00:03:05.003 LIB libspdk_fsdev_aio.a 00:03:05.003 LIB libspdk_bdev_error.a 00:03:05.003 SO libspdk_bdev_split.so.6.0 00:03:05.260 SO libspdk_fsdev_aio.so.1.0 00:03:05.260 SO libspdk_bdev_error.so.6.0 00:03:05.260 LIB libspdk_bdev_gpt.a 
00:03:05.260 LIB libspdk_sock_posix.a 00:03:05.260 SO libspdk_bdev_gpt.so.6.0 00:03:05.260 SYMLINK libspdk_bdev_split.so 00:03:05.260 SO libspdk_sock_posix.so.6.0 00:03:05.260 SYMLINK libspdk_fsdev_aio.so 00:03:05.260 SYMLINK libspdk_bdev_error.so 00:03:05.260 LIB libspdk_bdev_ftl.a 00:03:05.260 LIB libspdk_bdev_null.a 00:03:05.260 LIB libspdk_bdev_zone_block.a 00:03:05.260 LIB libspdk_bdev_iscsi.a 00:03:05.260 SYMLINK libspdk_bdev_gpt.so 00:03:05.260 SO libspdk_bdev_ftl.so.6.0 00:03:05.260 SO libspdk_bdev_null.so.6.0 00:03:05.260 SO libspdk_bdev_zone_block.so.6.0 00:03:05.260 SO libspdk_bdev_iscsi.so.6.0 00:03:05.260 LIB libspdk_bdev_passthru.a 00:03:05.260 LIB libspdk_bdev_aio.a 00:03:05.260 SYMLINK libspdk_sock_posix.so 00:03:05.260 SO libspdk_bdev_passthru.so.6.0 00:03:05.260 SO libspdk_bdev_aio.so.6.0 00:03:05.260 LIB libspdk_bdev_malloc.a 00:03:05.260 LIB libspdk_bdev_delay.a 00:03:05.260 SYMLINK libspdk_bdev_ftl.so 00:03:05.260 SYMLINK libspdk_bdev_null.so 00:03:05.260 SYMLINK libspdk_bdev_iscsi.so 00:03:05.260 SYMLINK libspdk_bdev_zone_block.so 00:03:05.260 SO libspdk_bdev_delay.so.6.0 00:03:05.260 SO libspdk_bdev_malloc.so.6.0 00:03:05.260 SYMLINK libspdk_bdev_passthru.so 00:03:05.260 SYMLINK libspdk_bdev_aio.so 00:03:05.518 SYMLINK libspdk_bdev_delay.so 00:03:05.519 SYMLINK libspdk_bdev_malloc.so 00:03:05.519 LIB libspdk_bdev_virtio.a 00:03:05.519 SO libspdk_bdev_virtio.so.6.0 00:03:05.519 LIB libspdk_bdev_lvol.a 00:03:05.519 SYMLINK libspdk_bdev_virtio.so 00:03:05.519 SO libspdk_bdev_lvol.so.6.0 00:03:05.519 SYMLINK libspdk_bdev_lvol.so 00:03:06.084 LIB libspdk_bdev_raid.a 00:03:06.084 SO libspdk_bdev_raid.so.6.0 00:03:06.084 SYMLINK libspdk_bdev_raid.so 00:03:07.980 LIB libspdk_bdev_nvme.a 00:03:07.980 SO libspdk_bdev_nvme.so.7.1 00:03:08.236 SYMLINK libspdk_bdev_nvme.so 00:03:08.494 CC module/event/subsystems/scheduler/scheduler.o 00:03:08.494 CC module/event/subsystems/vmd/vmd.o 00:03:08.494 CC module/event/subsystems/sock/sock.o 00:03:08.494 CC 
module/event/subsystems/fsdev/fsdev.o 00:03:08.494 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:08.494 CC module/event/subsystems/iobuf/iobuf.o 00:03:08.494 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:08.494 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:08.494 CC module/event/subsystems/keyring/keyring.o 00:03:08.752 LIB libspdk_event_keyring.a 00:03:08.752 LIB libspdk_event_fsdev.a 00:03:08.752 LIB libspdk_event_vhost_blk.a 00:03:08.752 LIB libspdk_event_scheduler.a 00:03:08.752 LIB libspdk_event_sock.a 00:03:08.752 LIB libspdk_event_vmd.a 00:03:08.752 SO libspdk_event_keyring.so.1.0 00:03:08.752 SO libspdk_event_fsdev.so.1.0 00:03:08.752 LIB libspdk_event_iobuf.a 00:03:08.752 SO libspdk_event_vhost_blk.so.3.0 00:03:08.752 SO libspdk_event_scheduler.so.4.0 00:03:08.752 SO libspdk_event_sock.so.5.0 00:03:08.752 SO libspdk_event_vmd.so.6.0 00:03:08.752 SO libspdk_event_iobuf.so.3.0 00:03:08.752 SYMLINK libspdk_event_keyring.so 00:03:08.752 SYMLINK libspdk_event_fsdev.so 00:03:08.752 SYMLINK libspdk_event_vhost_blk.so 00:03:08.752 SYMLINK libspdk_event_scheduler.so 00:03:08.752 SYMLINK libspdk_event_sock.so 00:03:08.752 SYMLINK libspdk_event_vmd.so 00:03:08.752 SYMLINK libspdk_event_iobuf.so 00:03:09.009 CC module/event/subsystems/accel/accel.o 00:03:09.009 LIB libspdk_event_accel.a 00:03:09.269 SO libspdk_event_accel.so.6.0 00:03:09.269 SYMLINK libspdk_event_accel.so 00:03:09.269 CC module/event/subsystems/bdev/bdev.o 00:03:09.526 LIB libspdk_event_bdev.a 00:03:09.526 SO libspdk_event_bdev.so.6.0 00:03:09.526 SYMLINK libspdk_event_bdev.so 00:03:09.784 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:09.784 CC module/event/subsystems/nbd/nbd.o 00:03:09.784 CC module/event/subsystems/scsi/scsi.o 00:03:09.784 CC module/event/subsystems/ublk/ublk.o 00:03:09.784 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:10.041 LIB libspdk_event_ublk.a 00:03:10.041 LIB libspdk_event_nbd.a 00:03:10.041 SO libspdk_event_ublk.so.3.0 00:03:10.041 SO 
libspdk_event_nbd.so.6.0 00:03:10.041 LIB libspdk_event_scsi.a 00:03:10.041 SO libspdk_event_scsi.so.6.0 00:03:10.041 SYMLINK libspdk_event_ublk.so 00:03:10.041 SYMLINK libspdk_event_nbd.so 00:03:10.041 SYMLINK libspdk_event_scsi.so 00:03:10.041 LIB libspdk_event_nvmf.a 00:03:10.041 SO libspdk_event_nvmf.so.6.0 00:03:10.041 SYMLINK libspdk_event_nvmf.so 00:03:10.299 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:10.299 CC module/event/subsystems/iscsi/iscsi.o 00:03:10.299 LIB libspdk_event_vhost_scsi.a 00:03:10.299 SO libspdk_event_vhost_scsi.so.3.0 00:03:10.299 LIB libspdk_event_iscsi.a 00:03:10.299 SO libspdk_event_iscsi.so.6.0 00:03:10.557 SYMLINK libspdk_event_vhost_scsi.so 00:03:10.557 SYMLINK libspdk_event_iscsi.so 00:03:10.557 SO libspdk.so.6.0 00:03:10.557 SYMLINK libspdk.so 00:03:10.822 CC app/spdk_lspci/spdk_lspci.o 00:03:10.822 CXX app/trace/trace.o 00:03:10.822 CC app/spdk_top/spdk_top.o 00:03:10.822 CC app/trace_record/trace_record.o 00:03:10.822 CC app/spdk_nvme_discover/discovery_aer.o 00:03:10.822 CC app/spdk_nvme_identify/identify.o 00:03:10.822 CC app/spdk_nvme_perf/perf.o 00:03:10.822 TEST_HEADER include/spdk/accel.h 00:03:10.822 CC test/rpc_client/rpc_client_test.o 00:03:10.822 TEST_HEADER include/spdk/accel_module.h 00:03:10.822 TEST_HEADER include/spdk/assert.h 00:03:10.822 TEST_HEADER include/spdk/barrier.h 00:03:10.822 TEST_HEADER include/spdk/base64.h 00:03:10.822 TEST_HEADER include/spdk/bdev.h 00:03:10.822 TEST_HEADER include/spdk/bdev_module.h 00:03:10.822 TEST_HEADER include/spdk/bdev_zone.h 00:03:10.822 TEST_HEADER include/spdk/bit_array.h 00:03:10.822 TEST_HEADER include/spdk/bit_pool.h 00:03:10.822 TEST_HEADER include/spdk/blob_bdev.h 00:03:10.822 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:10.822 TEST_HEADER include/spdk/blobfs.h 00:03:10.822 TEST_HEADER include/spdk/blob.h 00:03:10.822 TEST_HEADER include/spdk/conf.h 00:03:10.822 TEST_HEADER include/spdk/config.h 00:03:10.822 TEST_HEADER include/spdk/cpuset.h 
00:03:10.822 TEST_HEADER include/spdk/crc16.h 00:03:10.822 TEST_HEADER include/spdk/crc32.h 00:03:10.822 TEST_HEADER include/spdk/crc64.h 00:03:10.822 TEST_HEADER include/spdk/dif.h 00:03:10.822 TEST_HEADER include/spdk/dma.h 00:03:10.822 TEST_HEADER include/spdk/endian.h 00:03:10.822 TEST_HEADER include/spdk/env_dpdk.h 00:03:10.822 TEST_HEADER include/spdk/env.h 00:03:10.822 TEST_HEADER include/spdk/event.h 00:03:10.822 TEST_HEADER include/spdk/fd_group.h 00:03:10.822 TEST_HEADER include/spdk/fd.h 00:03:10.822 TEST_HEADER include/spdk/file.h 00:03:10.822 TEST_HEADER include/spdk/fsdev.h 00:03:10.822 TEST_HEADER include/spdk/fsdev_module.h 00:03:10.822 TEST_HEADER include/spdk/ftl.h 00:03:10.822 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:10.822 TEST_HEADER include/spdk/hexlify.h 00:03:10.822 TEST_HEADER include/spdk/gpt_spec.h 00:03:10.822 TEST_HEADER include/spdk/histogram_data.h 00:03:10.822 TEST_HEADER include/spdk/idxd.h 00:03:10.822 TEST_HEADER include/spdk/idxd_spec.h 00:03:10.822 TEST_HEADER include/spdk/init.h 00:03:10.822 TEST_HEADER include/spdk/ioat.h 00:03:10.822 TEST_HEADER include/spdk/ioat_spec.h 00:03:10.822 TEST_HEADER include/spdk/json.h 00:03:10.822 TEST_HEADER include/spdk/iscsi_spec.h 00:03:10.822 TEST_HEADER include/spdk/jsonrpc.h 00:03:10.822 TEST_HEADER include/spdk/keyring.h 00:03:10.822 TEST_HEADER include/spdk/keyring_module.h 00:03:10.822 TEST_HEADER include/spdk/likely.h 00:03:10.822 TEST_HEADER include/spdk/log.h 00:03:10.822 TEST_HEADER include/spdk/lvol.h 00:03:10.822 TEST_HEADER include/spdk/md5.h 00:03:10.822 TEST_HEADER include/spdk/memory.h 00:03:10.822 TEST_HEADER include/spdk/mmio.h 00:03:10.822 TEST_HEADER include/spdk/nbd.h 00:03:10.822 TEST_HEADER include/spdk/net.h 00:03:10.822 TEST_HEADER include/spdk/nvme.h 00:03:10.822 TEST_HEADER include/spdk/notify.h 00:03:10.822 TEST_HEADER include/spdk/nvme_intel.h 00:03:10.822 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:10.822 TEST_HEADER include/spdk/nvme_ocssd_spec.h 
00:03:10.822 TEST_HEADER include/spdk/nvme_spec.h 00:03:10.822 TEST_HEADER include/spdk/nvme_zns.h 00:03:10.822 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:10.822 TEST_HEADER include/spdk/nvmf.h 00:03:10.822 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:10.822 TEST_HEADER include/spdk/nvmf_spec.h 00:03:10.822 TEST_HEADER include/spdk/nvmf_transport.h 00:03:10.822 TEST_HEADER include/spdk/opal_spec.h 00:03:10.822 TEST_HEADER include/spdk/opal.h 00:03:10.822 TEST_HEADER include/spdk/pipe.h 00:03:10.822 TEST_HEADER include/spdk/pci_ids.h 00:03:10.822 TEST_HEADER include/spdk/reduce.h 00:03:10.822 TEST_HEADER include/spdk/queue.h 00:03:10.822 TEST_HEADER include/spdk/rpc.h 00:03:10.822 TEST_HEADER include/spdk/scheduler.h 00:03:10.822 TEST_HEADER include/spdk/scsi.h 00:03:10.822 TEST_HEADER include/spdk/scsi_spec.h 00:03:10.822 TEST_HEADER include/spdk/sock.h 00:03:10.822 TEST_HEADER include/spdk/stdinc.h 00:03:10.822 TEST_HEADER include/spdk/string.h 00:03:10.822 TEST_HEADER include/spdk/thread.h 00:03:10.822 TEST_HEADER include/spdk/trace.h 00:03:10.822 TEST_HEADER include/spdk/tree.h 00:03:10.822 TEST_HEADER include/spdk/trace_parser.h 00:03:10.822 TEST_HEADER include/spdk/ublk.h 00:03:10.822 TEST_HEADER include/spdk/util.h 00:03:10.822 TEST_HEADER include/spdk/uuid.h 00:03:10.822 TEST_HEADER include/spdk/version.h 00:03:10.822 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:10.822 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:10.822 TEST_HEADER include/spdk/vhost.h 00:03:10.822 TEST_HEADER include/spdk/vmd.h 00:03:10.822 TEST_HEADER include/spdk/xor.h 00:03:10.822 TEST_HEADER include/spdk/zipf.h 00:03:10.822 CXX test/cpp_headers/accel_module.o 00:03:10.822 CXX test/cpp_headers/accel.o 00:03:10.822 CXX test/cpp_headers/assert.o 00:03:10.822 CXX test/cpp_headers/base64.o 00:03:10.822 CXX test/cpp_headers/barrier.o 00:03:10.822 CXX test/cpp_headers/bdev.o 00:03:10.822 CC app/spdk_dd/spdk_dd.o 00:03:10.822 CXX test/cpp_headers/bdev_module.o 00:03:10.822 CXX 
test/cpp_headers/bdev_zone.o 00:03:10.822 CXX test/cpp_headers/bit_array.o 00:03:10.822 CXX test/cpp_headers/bit_pool.o 00:03:10.822 CXX test/cpp_headers/blob_bdev.o 00:03:10.822 CC app/nvmf_tgt/nvmf_main.o 00:03:10.822 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:10.822 CXX test/cpp_headers/blobfs_bdev.o 00:03:10.822 CXX test/cpp_headers/blobfs.o 00:03:10.822 CC app/iscsi_tgt/iscsi_tgt.o 00:03:10.822 CXX test/cpp_headers/blob.o 00:03:10.822 CXX test/cpp_headers/conf.o 00:03:10.822 CXX test/cpp_headers/config.o 00:03:10.822 CXX test/cpp_headers/cpuset.o 00:03:10.822 CXX test/cpp_headers/crc16.o 00:03:10.822 CC app/spdk_tgt/spdk_tgt.o 00:03:10.822 CXX test/cpp_headers/crc32.o 00:03:10.822 CC examples/util/zipf/zipf.o 00:03:10.822 CC test/thread/poller_perf/poller_perf.o 00:03:10.822 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:10.822 CC test/env/memory/memory_ut.o 00:03:10.822 CC test/env/vtophys/vtophys.o 00:03:10.822 CC test/app/histogram_perf/histogram_perf.o 00:03:10.822 CC app/fio/nvme/fio_plugin.o 00:03:10.822 CC examples/ioat/verify/verify.o 00:03:10.822 CC test/env/pci/pci_ut.o 00:03:10.822 CC test/app/jsoncat/jsoncat.o 00:03:10.822 CC test/app/stub/stub.o 00:03:10.822 CC examples/ioat/perf/perf.o 00:03:11.088 CC test/dma/test_dma/test_dma.o 00:03:11.088 CC app/fio/bdev/fio_plugin.o 00:03:11.088 CC test/app/bdev_svc/bdev_svc.o 00:03:11.088 CC test/env/mem_callbacks/mem_callbacks.o 00:03:11.088 LINK spdk_lspci 00:03:11.088 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:11.088 LINK rpc_client_test 00:03:11.088 LINK spdk_nvme_discover 00:03:11.349 LINK poller_perf 00:03:11.349 LINK zipf 00:03:11.349 LINK nvmf_tgt 00:03:11.349 LINK jsoncat 00:03:11.349 LINK interrupt_tgt 00:03:11.349 LINK histogram_perf 00:03:11.349 CXX test/cpp_headers/crc64.o 00:03:11.349 CXX test/cpp_headers/dif.o 00:03:11.349 LINK vtophys 00:03:11.349 CXX test/cpp_headers/dma.o 00:03:11.349 CXX test/cpp_headers/endian.o 00:03:11.349 CXX test/cpp_headers/env_dpdk.o 00:03:11.349 
CXX test/cpp_headers/env.o 00:03:11.349 LINK env_dpdk_post_init 00:03:11.349 CXX test/cpp_headers/event.o 00:03:11.349 CXX test/cpp_headers/fd_group.o 00:03:11.349 LINK iscsi_tgt 00:03:11.349 CXX test/cpp_headers/fd.o 00:03:11.349 CXX test/cpp_headers/file.o 00:03:11.349 CXX test/cpp_headers/fsdev.o 00:03:11.349 CXX test/cpp_headers/fsdev_module.o 00:03:11.349 LINK stub 00:03:11.349 CXX test/cpp_headers/ftl.o 00:03:11.349 CXX test/cpp_headers/fuse_dispatcher.o 00:03:11.349 CXX test/cpp_headers/gpt_spec.o 00:03:11.349 LINK spdk_tgt 00:03:11.349 CXX test/cpp_headers/hexlify.o 00:03:11.349 LINK bdev_svc 00:03:11.349 CXX test/cpp_headers/histogram_data.o 00:03:11.349 LINK spdk_trace_record 00:03:11.349 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:11.349 LINK verify 00:03:11.349 LINK ioat_perf 00:03:11.349 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:11.608 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:11.608 CXX test/cpp_headers/idxd.o 00:03:11.608 CXX test/cpp_headers/idxd_spec.o 00:03:11.608 CXX test/cpp_headers/init.o 00:03:11.608 CXX test/cpp_headers/ioat.o 00:03:11.608 CXX test/cpp_headers/ioat_spec.o 00:03:11.608 CXX test/cpp_headers/iscsi_spec.o 00:03:11.608 CXX test/cpp_headers/json.o 00:03:11.608 CXX test/cpp_headers/jsonrpc.o 00:03:11.608 LINK spdk_dd 00:03:11.608 CXX test/cpp_headers/keyring.o 00:03:11.608 CXX test/cpp_headers/keyring_module.o 00:03:11.608 CXX test/cpp_headers/likely.o 00:03:11.608 CXX test/cpp_headers/log.o 00:03:11.877 CXX test/cpp_headers/lvol.o 00:03:11.877 CXX test/cpp_headers/md5.o 00:03:11.877 LINK spdk_trace 00:03:11.877 CXX test/cpp_headers/memory.o 00:03:11.877 CXX test/cpp_headers/mmio.o 00:03:11.877 CXX test/cpp_headers/nbd.o 00:03:11.877 CXX test/cpp_headers/net.o 00:03:11.877 CXX test/cpp_headers/notify.o 00:03:11.877 CXX test/cpp_headers/nvme.o 00:03:11.877 CXX test/cpp_headers/nvme_intel.o 00:03:11.877 CXX test/cpp_headers/nvme_ocssd.o 00:03:11.877 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:11.877 CXX 
test/cpp_headers/nvme_spec.o 00:03:11.877 CXX test/cpp_headers/nvme_zns.o 00:03:11.877 CXX test/cpp_headers/nvmf_cmd.o 00:03:11.877 LINK pci_ut 00:03:11.877 CC test/event/reactor/reactor.o 00:03:11.877 CC examples/sock/hello_world/hello_sock.o 00:03:11.877 CC test/event/reactor_perf/reactor_perf.o 00:03:11.877 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:11.877 CC test/event/event_perf/event_perf.o 00:03:11.877 CC examples/thread/thread/thread_ex.o 00:03:12.136 CC test/event/app_repeat/app_repeat.o 00:03:12.137 CC examples/vmd/lsvmd/lsvmd.o 00:03:12.137 CC examples/idxd/perf/perf.o 00:03:12.137 CXX test/cpp_headers/nvmf.o 00:03:12.137 CC test/event/scheduler/scheduler.o 00:03:12.137 CXX test/cpp_headers/nvmf_spec.o 00:03:12.137 CXX test/cpp_headers/nvmf_transport.o 00:03:12.137 CXX test/cpp_headers/opal.o 00:03:12.137 CXX test/cpp_headers/opal_spec.o 00:03:12.137 CC examples/vmd/led/led.o 00:03:12.137 CXX test/cpp_headers/pci_ids.o 00:03:12.137 CXX test/cpp_headers/queue.o 00:03:12.137 CXX test/cpp_headers/pipe.o 00:03:12.137 CXX test/cpp_headers/reduce.o 00:03:12.137 CXX test/cpp_headers/rpc.o 00:03:12.137 CXX test/cpp_headers/scheduler.o 00:03:12.137 LINK nvme_fuzz 00:03:12.137 LINK test_dma 00:03:12.137 CXX test/cpp_headers/scsi.o 00:03:12.137 CXX test/cpp_headers/scsi_spec.o 00:03:12.137 CXX test/cpp_headers/sock.o 00:03:12.137 CXX test/cpp_headers/stdinc.o 00:03:12.137 LINK spdk_bdev 00:03:12.137 CXX test/cpp_headers/string.o 00:03:12.137 LINK reactor 00:03:12.137 CXX test/cpp_headers/thread.o 00:03:12.137 LINK reactor_perf 00:03:12.137 LINK spdk_nvme 00:03:12.398 LINK mem_callbacks 00:03:12.398 CXX test/cpp_headers/trace.o 00:03:12.398 CXX test/cpp_headers/trace_parser.o 00:03:12.398 CXX test/cpp_headers/tree.o 00:03:12.398 CXX test/cpp_headers/ublk.o 00:03:12.398 LINK event_perf 00:03:12.398 CXX test/cpp_headers/util.o 00:03:12.398 LINK lsvmd 00:03:12.398 LINK app_repeat 00:03:12.398 CXX test/cpp_headers/uuid.o 00:03:12.398 CXX test/cpp_headers/version.o 
00:03:12.398 CC app/vhost/vhost.o 00:03:12.398 CXX test/cpp_headers/vfio_user_pci.o 00:03:12.398 CXX test/cpp_headers/vfio_user_spec.o 00:03:12.398 CXX test/cpp_headers/vhost.o 00:03:12.398 CXX test/cpp_headers/vmd.o 00:03:12.398 CXX test/cpp_headers/xor.o 00:03:12.398 CXX test/cpp_headers/zipf.o 00:03:12.398 LINK led 00:03:12.398 LINK thread 00:03:12.657 LINK vhost_fuzz 00:03:12.657 LINK hello_sock 00:03:12.657 LINK scheduler 00:03:12.657 LINK vhost 00:03:12.915 CC test/nvme/simple_copy/simple_copy.o 00:03:12.915 CC test/nvme/err_injection/err_injection.o 00:03:12.915 CC test/nvme/aer/aer.o 00:03:12.915 CC test/nvme/sgl/sgl.o 00:03:12.915 CC test/nvme/reset/reset.o 00:03:12.915 CC test/nvme/startup/startup.o 00:03:12.915 CC test/nvme/e2edp/nvme_dp.o 00:03:12.915 CC test/nvme/reserve/reserve.o 00:03:12.915 CC test/nvme/boot_partition/boot_partition.o 00:03:12.915 CC test/nvme/compliance/nvme_compliance.o 00:03:12.915 CC test/nvme/cuse/cuse.o 00:03:12.915 CC test/nvme/overhead/overhead.o 00:03:12.915 CC test/nvme/connect_stress/connect_stress.o 00:03:12.915 CC test/nvme/fdp/fdp.o 00:03:12.915 CC test/nvme/fused_ordering/fused_ordering.o 00:03:12.915 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:12.915 LINK spdk_nvme_perf 00:03:12.915 LINK idxd_perf 00:03:12.915 LINK spdk_nvme_identify 00:03:12.915 CC test/accel/dif/dif.o 00:03:12.915 CC test/blobfs/mkfs/mkfs.o 00:03:12.915 CC test/lvol/esnap/esnap.o 00:03:12.915 LINK spdk_top 00:03:12.915 CC examples/nvme/reconnect/reconnect.o 00:03:12.915 CC examples/nvme/arbitration/arbitration.o 00:03:12.915 CC examples/nvme/hello_world/hello_world.o 00:03:12.915 CC examples/nvme/abort/abort.o 00:03:12.915 CC examples/nvme/hotplug/hotplug.o 00:03:12.915 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:12.915 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:12.915 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:13.173 CC examples/accel/perf/accel_perf.o 00:03:13.173 CC examples/blob/hello_world/hello_blob.o 00:03:13.173 CC 
examples/blob/cli/blobcli.o 00:03:13.173 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:13.173 LINK fused_ordering 00:03:13.173 LINK startup 00:03:13.173 LINK connect_stress 00:03:13.173 LINK reserve 00:03:13.173 LINK simple_copy 00:03:13.173 LINK boot_partition 00:03:13.173 LINK mkfs 00:03:13.173 LINK err_injection 00:03:13.173 LINK doorbell_aers 00:03:13.173 LINK aer 00:03:13.173 LINK pmr_persistence 00:03:13.431 LINK sgl 00:03:13.431 LINK memory_ut 00:03:13.431 LINK nvme_dp 00:03:13.431 LINK reset 00:03:13.431 LINK hello_blob 00:03:13.431 LINK cmb_copy 00:03:13.431 LINK overhead 00:03:13.431 LINK hotplug 00:03:13.431 LINK hello_fsdev 00:03:13.431 LINK hello_world 00:03:13.431 LINK fdp 00:03:13.431 LINK nvme_compliance 00:03:13.689 LINK arbitration 00:03:13.689 LINK abort 00:03:13.689 LINK reconnect 00:03:13.946 LINK blobcli 00:03:13.946 LINK accel_perf 00:03:13.946 LINK nvme_manage 00:03:13.946 LINK dif 00:03:14.204 CC examples/bdev/hello_world/hello_bdev.o 00:03:14.204 CC examples/bdev/bdevperf/bdevperf.o 00:03:14.462 CC test/bdev/bdevio/bdevio.o 00:03:14.462 LINK hello_bdev 00:03:14.462 LINK iscsi_fuzz 00:03:14.720 LINK cuse 00:03:14.977 LINK bdevio 00:03:15.234 LINK bdevperf 00:03:15.799 CC examples/nvmf/nvmf/nvmf.o 00:03:16.057 LINK nvmf 00:03:20.240 LINK esnap 00:03:20.240 00:03:20.240 real 1m21.083s 00:03:20.240 user 13m10.466s 00:03:20.240 sys 2m36.345s 00:03:20.240 18:10:18 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:20.240 18:10:18 make -- common/autotest_common.sh@10 -- $ set +x 00:03:20.240 ************************************ 00:03:20.240 END TEST make 00:03:20.240 ************************************ 00:03:20.240 18:10:18 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:20.240 18:10:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:20.240 18:10:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:20.240 18:10:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.240 18:10:18 
-- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:20.240 18:10:18 -- pm/common@44 -- $ pid=2739885 00:03:20.240 18:10:18 -- pm/common@50 -- $ kill -TERM 2739885 00:03:20.240 18:10:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.240 18:10:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:20.240 18:10:18 -- pm/common@44 -- $ pid=2739887 00:03:20.240 18:10:18 -- pm/common@50 -- $ kill -TERM 2739887 00:03:20.240 18:10:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.240 18:10:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:20.240 18:10:18 -- pm/common@44 -- $ pid=2739889 00:03:20.240 18:10:18 -- pm/common@50 -- $ kill -TERM 2739889 00:03:20.240 18:10:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.240 18:10:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:20.240 18:10:18 -- pm/common@44 -- $ pid=2739919 00:03:20.241 18:10:18 -- pm/common@50 -- $ sudo -E kill -TERM 2739919 00:03:20.241 18:10:18 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:20.241 18:10:18 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:20.241 18:10:18 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:20.241 18:10:18 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:20.241 18:10:18 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:20.241 18:10:18 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:20.241 18:10:18 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:20.241 18:10:18 -- scripts/common.sh@333 -- # local ver1 ver1_l 
00:03:20.241 18:10:18 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:20.241 18:10:18 -- scripts/common.sh@336 -- # IFS=.-: 00:03:20.241 18:10:18 -- scripts/common.sh@336 -- # read -ra ver1 00:03:20.241 18:10:18 -- scripts/common.sh@337 -- # IFS=.-: 00:03:20.241 18:10:18 -- scripts/common.sh@337 -- # read -ra ver2 00:03:20.241 18:10:18 -- scripts/common.sh@338 -- # local 'op=<' 00:03:20.241 18:10:18 -- scripts/common.sh@340 -- # ver1_l=2 00:03:20.241 18:10:18 -- scripts/common.sh@341 -- # ver2_l=1 00:03:20.241 18:10:18 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:20.241 18:10:18 -- scripts/common.sh@344 -- # case "$op" in 00:03:20.241 18:10:18 -- scripts/common.sh@345 -- # : 1 00:03:20.241 18:10:18 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:20.241 18:10:18 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:20.241 18:10:18 -- scripts/common.sh@365 -- # decimal 1 00:03:20.241 18:10:18 -- scripts/common.sh@353 -- # local d=1 00:03:20.241 18:10:18 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:20.241 18:10:18 -- scripts/common.sh@355 -- # echo 1 00:03:20.241 18:10:18 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:20.241 18:10:18 -- scripts/common.sh@366 -- # decimal 2 00:03:20.241 18:10:18 -- scripts/common.sh@353 -- # local d=2 00:03:20.241 18:10:18 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:20.241 18:10:18 -- scripts/common.sh@355 -- # echo 2 00:03:20.241 18:10:18 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:20.241 18:10:18 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:20.241 18:10:18 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:20.241 18:10:18 -- scripts/common.sh@368 -- # return 0 00:03:20.241 18:10:18 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:20.241 18:10:18 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:20.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:20.241 --rc genhtml_branch_coverage=1 00:03:20.241 --rc genhtml_function_coverage=1 00:03:20.241 --rc genhtml_legend=1 00:03:20.241 --rc geninfo_all_blocks=1 00:03:20.241 --rc geninfo_unexecuted_blocks=1 00:03:20.241 00:03:20.241 ' 00:03:20.241 18:10:18 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:20.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.241 --rc genhtml_branch_coverage=1 00:03:20.241 --rc genhtml_function_coverage=1 00:03:20.241 --rc genhtml_legend=1 00:03:20.241 --rc geninfo_all_blocks=1 00:03:20.241 --rc geninfo_unexecuted_blocks=1 00:03:20.241 00:03:20.241 ' 00:03:20.241 18:10:18 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:20.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.241 --rc genhtml_branch_coverage=1 00:03:20.241 --rc genhtml_function_coverage=1 00:03:20.241 --rc genhtml_legend=1 00:03:20.241 --rc geninfo_all_blocks=1 00:03:20.241 --rc geninfo_unexecuted_blocks=1 00:03:20.241 00:03:20.241 ' 00:03:20.241 18:10:18 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:20.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.241 --rc genhtml_branch_coverage=1 00:03:20.241 --rc genhtml_function_coverage=1 00:03:20.241 --rc genhtml_legend=1 00:03:20.241 --rc geninfo_all_blocks=1 00:03:20.241 --rc geninfo_unexecuted_blocks=1 00:03:20.241 00:03:20.241 ' 00:03:20.241 18:10:18 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:20.241 18:10:18 -- nvmf/common.sh@7 -- # uname -s 00:03:20.241 18:10:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:20.241 18:10:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:20.241 18:10:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:20.241 18:10:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:20.241 18:10:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:20.241 18:10:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:20.241 
18:10:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:20.241 18:10:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:20.241 18:10:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:20.241 18:10:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:20.241 18:10:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:20.241 18:10:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:20.241 18:10:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:20.241 18:10:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:20.241 18:10:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:20.241 18:10:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:20.241 18:10:18 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:20.241 18:10:18 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:20.241 18:10:18 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:20.241 18:10:18 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:20.241 18:10:18 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:20.241 18:10:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.241 18:10:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.241 18:10:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.241 18:10:18 -- paths/export.sh@5 -- # export PATH 00:03:20.241 18:10:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.241 18:10:18 -- nvmf/common.sh@51 -- # : 0 00:03:20.241 18:10:18 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:20.241 18:10:18 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:20.241 18:10:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:20.241 18:10:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:20.241 18:10:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:20.241 18:10:18 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:20.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:20.241 18:10:18 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:20.241 18:10:18 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:20.241 18:10:18 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:20.241 18:10:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:20.241 18:10:18 -- spdk/autotest.sh@32 -- # uname -s 00:03:20.241 18:10:18 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:20.241 18:10:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:20.241 18:10:18 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:20.241 18:10:18 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:20.241 18:10:18 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:20.241 18:10:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:20.241 18:10:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:20.241 18:10:18 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:20.241 18:10:18 -- spdk/autotest.sh@48 -- # udevadm_pid=2800829 00:03:20.241 18:10:18 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:20.241 18:10:18 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:20.241 18:10:18 -- pm/common@17 -- # local monitor 00:03:20.241 18:10:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.241 18:10:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.241 18:10:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.241 18:10:18 -- pm/common@21 -- # date +%s 00:03:20.241 18:10:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.241 18:10:18 -- pm/common@21 -- # date +%s 00:03:20.241 18:10:18 -- pm/common@21 -- # date +%s 00:03:20.241 18:10:18 -- pm/common@25 -- # sleep 1 00:03:20.241 18:10:18 -- pm/common@21 -- # date +%s 00:03:20.241 18:10:18 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731949818 00:03:20.241 18:10:18 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731949818 00:03:20.241 18:10:18 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731949818 
00:03:20.241 18:10:18 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731949818 00:03:20.241 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731949818_collect-cpu-load.pm.log 00:03:20.241 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731949818_collect-vmstat.pm.log 00:03:20.241 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731949818_collect-cpu-temp.pm.log 00:03:20.241 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731949818_collect-bmc-pm.bmc.pm.log 00:03:21.622 18:10:19 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:21.622 18:10:19 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:21.622 18:10:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:21.622 18:10:19 -- common/autotest_common.sh@10 -- # set +x 00:03:21.622 18:10:19 -- spdk/autotest.sh@59 -- # create_test_list 00:03:21.622 18:10:19 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:21.622 18:10:19 -- common/autotest_common.sh@10 -- # set +x 00:03:21.622 18:10:19 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:21.622 18:10:19 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:21.622 18:10:19 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:21.622 18:10:19 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:21.622 18:10:19 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:21.622 18:10:19 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 
00:03:21.622 18:10:19 -- common/autotest_common.sh@1457 -- # uname 00:03:21.622 18:10:19 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:21.622 18:10:19 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:21.622 18:10:19 -- common/autotest_common.sh@1477 -- # uname 00:03:21.622 18:10:19 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:21.622 18:10:19 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:21.622 18:10:19 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:21.622 lcov: LCOV version 1.15 00:03:21.622 18:10:19 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:48.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:48.155 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:58.125 18:10:55 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:58.125 18:10:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:58.125 18:10:55 -- common/autotest_common.sh@10 -- # set +x 00:03:58.125 18:10:55 -- spdk/autotest.sh@78 -- # rm -f 00:03:58.125 18:10:55 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.125 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:58.125 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:58.125 0000:00:04.6 (8086 0e26): Already 
using the ioatdma driver 00:03:58.125 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:58.125 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:58.125 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:58.125 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:58.125 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:58.386 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:58.386 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:58.386 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:58.386 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:58.386 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:58.386 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:58.386 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:58.386 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:58.386 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:58.386 18:10:56 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:58.386 18:10:56 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:58.386 18:10:56 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:58.386 18:10:56 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:58.386 18:10:56 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:58.386 18:10:56 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:58.386 18:10:56 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:58.386 18:10:56 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:58.386 18:10:56 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:58.386 18:10:56 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:58.386 18:10:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:58.386 18:10:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:58.386 18:10:56 -- 
spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:58.386 18:10:56 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:58.386 18:10:56 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:58.386 No valid GPT data, bailing 00:03:58.386 18:10:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:58.645 18:10:56 -- scripts/common.sh@394 -- # pt= 00:03:58.645 18:10:56 -- scripts/common.sh@395 -- # return 1 00:03:58.645 18:10:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:58.645 1+0 records in 00:03:58.645 1+0 records out 00:03:58.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00212199 s, 494 MB/s 00:03:58.645 18:10:56 -- spdk/autotest.sh@105 -- # sync 00:03:58.645 18:10:56 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:58.645 18:10:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:58.645 18:10:56 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:00.663 18:10:58 -- spdk/autotest.sh@111 -- # uname -s 00:04:00.663 18:10:58 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:00.663 18:10:58 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:00.663 18:10:58 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:01.600 Hugepages 00:04:01.600 node hugesize free / total 00:04:01.600 node0 1048576kB 0 / 0 00:04:01.600 node0 2048kB 0 / 0 00:04:01.600 node1 1048576kB 0 / 0 00:04:01.600 node1 2048kB 0 / 0 00:04:01.600 00:04:01.600 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:01.600 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:01.600 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:01.600 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:01.600 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:01.600 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:01.600 I/OAT 0000:00:04.5 8086 0e25 0 
ioatdma - - 00:04:01.600 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:01.600 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:01.600 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:01.600 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:01.600 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:01.600 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:01.600 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:01.600 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:01.600 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:01.600 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:01.859 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:01.859 18:11:00 -- spdk/autotest.sh@117 -- # uname -s 00:04:01.859 18:11:00 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:01.859 18:11:00 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:01.859 18:11:00 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:03.236 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:03.236 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:03.236 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:03.236 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:03.236 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:03.236 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:03.236 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:03.236 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:03.236 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:03.236 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:03.236 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:03.236 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:03.236 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:03.236 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:03.236 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:03.236 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:04.177 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:04.177 18:11:02 -- 
common/autotest_common.sh@1517 -- # sleep 1 00:04:05.118 18:11:03 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:05.118 18:11:03 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:05.118 18:11:03 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:05.118 18:11:03 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:05.118 18:11:03 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:05.118 18:11:03 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:05.118 18:11:03 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:05.118 18:11:03 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:05.118 18:11:03 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:05.375 18:11:03 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:05.375 18:11:03 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:04:05.375 18:11:03 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:06.313 Waiting for block devices as requested 00:04:06.313 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:06.573 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:06.573 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:06.573 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:06.833 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:06.833 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:06.833 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:06.833 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:06.833 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:07.094 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:07.094 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:07.094 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:07.353 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:07.353 0000:80:04.3 (8086 0e23): 
vfio-pci -> ioatdma 00:04:07.353 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:07.353 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:07.353 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:07.611 18:11:05 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:07.611 18:11:05 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:07.611 18:11:05 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:07.611 18:11:05 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:04:07.611 18:11:05 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:07.611 18:11:05 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:07.611 18:11:05 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:07.611 18:11:05 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:07.611 18:11:05 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:07.611 18:11:05 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:07.611 18:11:05 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:07.612 18:11:05 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:07.612 18:11:05 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:07.612 18:11:05 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:07.612 18:11:05 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:07.612 18:11:05 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:07.612 18:11:05 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:07.612 18:11:05 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:07.612 18:11:05 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:07.612 18:11:05 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:07.612 18:11:05 -- 
common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:07.612 18:11:05 -- common/autotest_common.sh@1543 -- # continue 00:04:07.612 18:11:05 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:07.612 18:11:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:07.612 18:11:05 -- common/autotest_common.sh@10 -- # set +x 00:04:07.612 18:11:05 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:07.612 18:11:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:07.612 18:11:05 -- common/autotest_common.sh@10 -- # set +x 00:04:07.612 18:11:05 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:08.988 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:08.988 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:08.988 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:08.988 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:08.988 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:08.988 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:08.988 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:08.988 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:08.988 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:08.988 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:08.988 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:08.988 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:08.988 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:08.988 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:08.988 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:08.988 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:09.926 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:09.926 18:11:08 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:09.926 18:11:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:09.926 18:11:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.926 18:11:08 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:09.926 18:11:08 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:09.926 18:11:08 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:09.926 18:11:08 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:09.926 18:11:08 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:09.926 18:11:08 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:09.926 18:11:08 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:09.926 18:11:08 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:09.926 18:11:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:09.926 18:11:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:09.926 18:11:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:09.926 18:11:08 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:09.926 18:11:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:10.187 18:11:08 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:10.187 18:11:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:04:10.187 18:11:08 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:10.187 18:11:08 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:10.187 18:11:08 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:10.187 18:11:08 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:10.187 18:11:08 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:10.187 18:11:08 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:10.187 18:11:08 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:04:10.187 18:11:08 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:04:10.187 18:11:08 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2811073 00:04:10.187 18:11:08 -- common/autotest_common.sh@1583 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.187 18:11:08 -- common/autotest_common.sh@1585 -- # waitforlisten 2811073 00:04:10.187 18:11:08 -- common/autotest_common.sh@835 -- # '[' -z 2811073 ']' 00:04:10.187 18:11:08 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.187 18:11:08 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:10.187 18:11:08 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.187 18:11:08 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:10.187 18:11:08 -- common/autotest_common.sh@10 -- # set +x 00:04:10.187 [2024-11-18 18:11:08.415240] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:04:10.187 [2024-11-18 18:11:08.415379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2811073 ] 00:04:10.445 [2024-11-18 18:11:08.544732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.445 [2024-11-18 18:11:08.683937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.386 18:11:09 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:11.386 18:11:09 -- common/autotest_common.sh@868 -- # return 0 00:04:11.386 18:11:09 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:11.386 18:11:09 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:11.386 18:11:09 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:14.691 nvme0n1 00:04:14.691 18:11:12 -- common/autotest_common.sh@1591 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:14.950 [2024-11-18 18:11:13.034546] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:14.950 [2024-11-18 18:11:13.034631] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:14.950 request: 00:04:14.950 { 00:04:14.950 "nvme_ctrlr_name": "nvme0", 00:04:14.950 "password": "test", 00:04:14.950 "method": "bdev_nvme_opal_revert", 00:04:14.950 "req_id": 1 00:04:14.950 } 00:04:14.950 Got JSON-RPC error response 00:04:14.950 response: 00:04:14.950 { 00:04:14.950 "code": -32603, 00:04:14.950 "message": "Internal error" 00:04:14.950 } 00:04:14.950 18:11:13 -- common/autotest_common.sh@1591 -- # true 00:04:14.950 18:11:13 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:14.950 18:11:13 -- common/autotest_common.sh@1595 -- # killprocess 2811073 00:04:14.950 18:11:13 -- common/autotest_common.sh@954 -- # '[' -z 2811073 ']' 00:04:14.950 18:11:13 -- common/autotest_common.sh@958 -- # kill -0 2811073 00:04:14.950 18:11:13 -- common/autotest_common.sh@959 -- # uname 00:04:14.950 18:11:13 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.950 18:11:13 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2811073 00:04:14.950 18:11:13 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.950 18:11:13 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.950 18:11:13 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2811073' 00:04:14.950 killing process with pid 2811073 00:04:14.950 18:11:13 -- common/autotest_common.sh@973 -- # kill 2811073 00:04:14.950 18:11:13 -- common/autotest_common.sh@978 -- # wait 2811073 00:04:19.149 18:11:16 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:19.150 18:11:16 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:19.150 18:11:16 -- spdk/autotest.sh@142 -- # 
[[ 0 -eq 1 ]] 00:04:19.150 18:11:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:19.150 18:11:16 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:19.150 18:11:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.150 18:11:16 -- common/autotest_common.sh@10 -- # set +x 00:04:19.150 18:11:16 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:19.150 18:11:16 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:19.150 18:11:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.150 18:11:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.150 18:11:16 -- common/autotest_common.sh@10 -- # set +x 00:04:19.150 ************************************ 00:04:19.150 START TEST env 00:04:19.150 ************************************ 00:04:19.150 18:11:16 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:19.150 * Looking for test storage... 
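The `killprocess` trace above (a `kill -0` liveness probe, a `ps -o comm=` name lookup with a sudo special case, then `kill` and `wait`) follows a common shell teardown pattern. A minimal, simplified sketch of that pattern — not the actual `autotest_common.sh` helper, and with the sudo branch reduced to a bail-out:

```shell
# Sketch of the killprocess pattern seen in the trace:
# verify the pid is alive, inspect its command name, then kill and reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1        # is the process alive?
    local name
    name=$(ps --no-headers -o comm= -p "$pid")    # command name, e.g. reactor_0
    [ "$name" = sudo ] && return 1                # real helper special-cases sudo; we just bail
    kill "$pid"                                    # send SIGTERM
    wait "$pid" 2>/dev/null || true                # reap; wait's status reflects the signal
}
```

Note that `wait` only reaps children of the current shell, which is why the trace runs `kill` and `wait` from the same script that spawned the target.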
00:04:19.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:19.150 18:11:16 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:19.150 18:11:16 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:19.150 18:11:16 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:19.150 18:11:16 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:19.150 18:11:16 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.150 18:11:16 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.150 18:11:16 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.150 18:11:16 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.150 18:11:16 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.150 18:11:16 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.150 18:11:16 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.150 18:11:16 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.150 18:11:16 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.150 18:11:16 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.150 18:11:16 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.150 18:11:16 env -- scripts/common.sh@344 -- # case "$op" in 00:04:19.150 18:11:16 env -- scripts/common.sh@345 -- # : 1 00:04:19.150 18:11:16 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.150 18:11:16 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.150 18:11:16 env -- scripts/common.sh@365 -- # decimal 1 00:04:19.150 18:11:16 env -- scripts/common.sh@353 -- # local d=1 00:04:19.150 18:11:16 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.150 18:11:16 env -- scripts/common.sh@355 -- # echo 1 00:04:19.150 18:11:16 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.150 18:11:16 env -- scripts/common.sh@366 -- # decimal 2 00:04:19.150 18:11:16 env -- scripts/common.sh@353 -- # local d=2 00:04:19.150 18:11:16 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.150 18:11:16 env -- scripts/common.sh@355 -- # echo 2 00:04:19.150 18:11:16 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.150 18:11:16 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.150 18:11:16 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.150 18:11:16 env -- scripts/common.sh@368 -- # return 0 00:04:19.150 18:11:16 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.150 18:11:16 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:19.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.150 --rc genhtml_branch_coverage=1 00:04:19.150 --rc genhtml_function_coverage=1 00:04:19.150 --rc genhtml_legend=1 00:04:19.150 --rc geninfo_all_blocks=1 00:04:19.150 --rc geninfo_unexecuted_blocks=1 00:04:19.150 00:04:19.150 ' 00:04:19.150 18:11:16 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:19.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.150 --rc genhtml_branch_coverage=1 00:04:19.150 --rc genhtml_function_coverage=1 00:04:19.150 --rc genhtml_legend=1 00:04:19.150 --rc geninfo_all_blocks=1 00:04:19.150 --rc geninfo_unexecuted_blocks=1 00:04:19.150 00:04:19.150 ' 00:04:19.150 18:11:16 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:19.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:19.150 --rc genhtml_branch_coverage=1 00:04:19.150 --rc genhtml_function_coverage=1 00:04:19.150 --rc genhtml_legend=1 00:04:19.150 --rc geninfo_all_blocks=1 00:04:19.150 --rc geninfo_unexecuted_blocks=1 00:04:19.150 00:04:19.150 ' 00:04:19.150 18:11:16 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:19.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.150 --rc genhtml_branch_coverage=1 00:04:19.150 --rc genhtml_function_coverage=1 00:04:19.150 --rc genhtml_legend=1 00:04:19.150 --rc geninfo_all_blocks=1 00:04:19.150 --rc geninfo_unexecuted_blocks=1 00:04:19.150 00:04:19.150 ' 00:04:19.150 18:11:16 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:19.150 18:11:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.150 18:11:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.150 18:11:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.150 ************************************ 00:04:19.150 START TEST env_memory 00:04:19.150 ************************************ 00:04:19.150 18:11:16 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:19.150 00:04:19.150 00:04:19.150 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.150 http://cunit.sourceforge.net/ 00:04:19.150 00:04:19.150 00:04:19.150 Suite: memory 00:04:19.150 Test: alloc and free memory map ...[2024-11-18 18:11:17.016144] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:19.150 passed 00:04:19.150 Test: mem map translation ...[2024-11-18 18:11:17.055739] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:19.150 [2024-11-18 
18:11:17.055783] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:19.150 [2024-11-18 18:11:17.055855] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:19.150 [2024-11-18 18:11:17.055886] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:19.150 passed 00:04:19.150 Test: mem map registration ...[2024-11-18 18:11:17.119157] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:19.150 [2024-11-18 18:11:17.119200] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:19.150 passed 00:04:19.150 Test: mem map adjacent registrations ...passed 00:04:19.150 00:04:19.150 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.150 suites 1 1 n/a 0 0 00:04:19.150 tests 4 4 4 0 0 00:04:19.150 asserts 152 152 152 0 n/a 00:04:19.150 00:04:19.150 Elapsed time = 0.228 seconds 00:04:19.150 00:04:19.150 real 0m0.249s 00:04:19.150 user 0m0.236s 00:04:19.150 sys 0m0.013s 00:04:19.150 18:11:17 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.150 18:11:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:19.150 ************************************ 00:04:19.150 END TEST env_memory 00:04:19.150 ************************************ 00:04:19.150 18:11:17 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:19.150 18:11:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:04:19.150 18:11:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.150 18:11:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.150 ************************************ 00:04:19.150 START TEST env_vtophys 00:04:19.150 ************************************ 00:04:19.150 18:11:17 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:19.150 EAL: lib.eal log level changed from notice to debug 00:04:19.150 EAL: Detected lcore 0 as core 0 on socket 0 00:04:19.150 EAL: Detected lcore 1 as core 1 on socket 0 00:04:19.150 EAL: Detected lcore 2 as core 2 on socket 0 00:04:19.150 EAL: Detected lcore 3 as core 3 on socket 0 00:04:19.150 EAL: Detected lcore 4 as core 4 on socket 0 00:04:19.150 EAL: Detected lcore 5 as core 5 on socket 0 00:04:19.150 EAL: Detected lcore 6 as core 8 on socket 0 00:04:19.150 EAL: Detected lcore 7 as core 9 on socket 0 00:04:19.150 EAL: Detected lcore 8 as core 10 on socket 0 00:04:19.150 EAL: Detected lcore 9 as core 11 on socket 0 00:04:19.150 EAL: Detected lcore 10 as core 12 on socket 0 00:04:19.150 EAL: Detected lcore 11 as core 13 on socket 0 00:04:19.150 EAL: Detected lcore 12 as core 0 on socket 1 00:04:19.150 EAL: Detected lcore 13 as core 1 on socket 1 00:04:19.150 EAL: Detected lcore 14 as core 2 on socket 1 00:04:19.150 EAL: Detected lcore 15 as core 3 on socket 1 00:04:19.150 EAL: Detected lcore 16 as core 4 on socket 1 00:04:19.150 EAL: Detected lcore 17 as core 5 on socket 1 00:04:19.150 EAL: Detected lcore 18 as core 8 on socket 1 00:04:19.150 EAL: Detected lcore 19 as core 9 on socket 1 00:04:19.150 EAL: Detected lcore 20 as core 10 on socket 1 00:04:19.150 EAL: Detected lcore 21 as core 11 on socket 1 00:04:19.150 EAL: Detected lcore 22 as core 12 on socket 1 00:04:19.150 EAL: Detected lcore 23 as core 13 on socket 1 00:04:19.150 EAL: Detected lcore 24 as core 0 on socket 0 00:04:19.150 EAL: Detected lcore 25 as core 
1 on socket 0 00:04:19.150 EAL: Detected lcore 26 as core 2 on socket 0 00:04:19.151 EAL: Detected lcore 27 as core 3 on socket 0 00:04:19.151 EAL: Detected lcore 28 as core 4 on socket 0 00:04:19.151 EAL: Detected lcore 29 as core 5 on socket 0 00:04:19.151 EAL: Detected lcore 30 as core 8 on socket 0 00:04:19.151 EAL: Detected lcore 31 as core 9 on socket 0 00:04:19.151 EAL: Detected lcore 32 as core 10 on socket 0 00:04:19.151 EAL: Detected lcore 33 as core 11 on socket 0 00:04:19.151 EAL: Detected lcore 34 as core 12 on socket 0 00:04:19.151 EAL: Detected lcore 35 as core 13 on socket 0 00:04:19.151 EAL: Detected lcore 36 as core 0 on socket 1 00:04:19.151 EAL: Detected lcore 37 as core 1 on socket 1 00:04:19.151 EAL: Detected lcore 38 as core 2 on socket 1 00:04:19.151 EAL: Detected lcore 39 as core 3 on socket 1 00:04:19.151 EAL: Detected lcore 40 as core 4 on socket 1 00:04:19.151 EAL: Detected lcore 41 as core 5 on socket 1 00:04:19.151 EAL: Detected lcore 42 as core 8 on socket 1 00:04:19.151 EAL: Detected lcore 43 as core 9 on socket 1 00:04:19.151 EAL: Detected lcore 44 as core 10 on socket 1 00:04:19.151 EAL: Detected lcore 45 as core 11 on socket 1 00:04:19.151 EAL: Detected lcore 46 as core 12 on socket 1 00:04:19.151 EAL: Detected lcore 47 as core 13 on socket 1 00:04:19.151 EAL: Maximum logical cores by configuration: 128 00:04:19.151 EAL: Detected CPU lcores: 48 00:04:19.151 EAL: Detected NUMA nodes: 2 00:04:19.151 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:19.151 EAL: Detected shared linkage of DPDK 00:04:19.151 EAL: No shared files mode enabled, IPC will be disabled 00:04:19.151 EAL: Bus pci wants IOVA as 'DC' 00:04:19.151 EAL: Buses did not request a specific IOVA mode. 00:04:19.151 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:19.151 EAL: Selected IOVA mode 'VA' 00:04:19.151 EAL: Probing VFIO support... 
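The lcore detection lines above have a fixed shape (`EAL: Detected lcore N as core M on socket S`), so the 48-lcore/2-socket summary that follows can be cross-checked with a quick awk pass. A sketch, assuming clean one-entry-per-line EAL output (the helper name is hypothetical):

```shell
# Tally detected lcores per socket from EAL init lines of the form:
#   EAL: Detected lcore N as core M on socket S
count_lcores_per_socket() {
    # $NF is the trailing socket id; count one lcore per matching line.
    awk '/Detected lcore [0-9]+ as core [0-9]+ on socket/ { n[$NF]++ }
         END { for (s in n) printf "socket %s: %d lcores\n", s, n[s] }' "$@"
}
```

Fed the full listing from this log, it would report 24 lcores on each of the two sockets.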
00:04:19.151 EAL: IOMMU type 1 (Type 1) is supported 00:04:19.151 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:19.151 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:19.151 EAL: VFIO support initialized 00:04:19.151 EAL: Ask a virtual area of 0x2e000 bytes 00:04:19.151 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:19.151 EAL: Setting up physically contiguous memory... 00:04:19.151 EAL: Setting maximum number of open files to 524288 00:04:19.151 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:19.151 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:19.151 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:19.151 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.151 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:19.151 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.151 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.151 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:19.151 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:19.151 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.151 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:19.151 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.151 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.151 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:19.151 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:19.151 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.151 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:19.151 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.151 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.151 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:19.151 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:19.151 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.151 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:19.151 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.151 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.151 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:19.151 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:19.151 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:19.151 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.151 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:19.151 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:19.151 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.151 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:19.151 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:19.151 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.151 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:19.151 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:19.151 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.151 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:19.151 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:19.151 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.151 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:19.151 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:19.151 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.151 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:19.151 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:19.151 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.151 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:19.151 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:19.151 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.151 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
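The memseg setup above is regular: 4 segment lists per socket, each pairing a 0x61000-byte header with a 0x400000000-byte (16 GiB) virtual-address window. With 2 NUMA nodes that is 8 windows, so the reserved VA footprint can be checked with shell arithmetic:

```shell
# Reproduce the VA-reservation arithmetic from the EAL trace:
# 4 memseg lists per socket x 2 sockets, each with a 16 GiB window.
lists_per_socket=4
sockets=2
window=$((0x400000000))                        # 16 GiB per memseg list
total=$((lists_per_socket * sockets * window))
echo "reserved VA: $((total / 1024 / 1024 / 1024)) GiB"   # prints 128 GiB
```

This matches the spacing of the window bases in the log (0x200000200000, 0x200400400000, …), which step by roughly 16 GiB plus the small header areas.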
00:04:19.151 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:19.151 EAL: Hugepages will be freed exactly as allocated. 00:04:19.151 EAL: No shared files mode enabled, IPC is disabled 00:04:19.151 EAL: No shared files mode enabled, IPC is disabled 00:04:19.151 EAL: TSC frequency is ~2700000 KHz 00:04:19.151 EAL: Main lcore 0 is ready (tid=7f652aad4a40;cpuset=[0]) 00:04:19.151 EAL: Trying to obtain current memory policy. 00:04:19.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.151 EAL: Restoring previous memory policy: 0 00:04:19.151 EAL: request: mp_malloc_sync 00:04:19.151 EAL: No shared files mode enabled, IPC is disabled 00:04:19.151 EAL: Heap on socket 0 was expanded by 2MB 00:04:19.151 EAL: No shared files mode enabled, IPC is disabled 00:04:19.151 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:19.151 EAL: Mem event callback 'spdk:(nil)' registered 00:04:19.151 00:04:19.151 00:04:19.151 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.151 http://cunit.sourceforge.net/ 00:04:19.151 00:04:19.151 00:04:19.151 Suite: components_suite 00:04:19.722 Test: vtophys_malloc_test ...passed 00:04:19.722 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:19.722 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.722 EAL: Restoring previous memory policy: 4 00:04:19.722 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.722 EAL: request: mp_malloc_sync 00:04:19.722 EAL: No shared files mode enabled, IPC is disabled 00:04:19.722 EAL: Heap on socket 0 was expanded by 4MB 00:04:19.722 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.722 EAL: request: mp_malloc_sync 00:04:19.722 EAL: No shared files mode enabled, IPC is disabled 00:04:19.722 EAL: Heap on socket 0 was shrunk by 4MB 00:04:19.722 EAL: Trying to obtain current memory policy. 
00:04:19.722 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.722 EAL: Restoring previous memory policy: 4 00:04:19.722 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.722 EAL: request: mp_malloc_sync 00:04:19.722 EAL: No shared files mode enabled, IPC is disabled 00:04:19.722 EAL: Heap on socket 0 was expanded by 6MB 00:04:19.722 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.722 EAL: request: mp_malloc_sync 00:04:19.722 EAL: No shared files mode enabled, IPC is disabled 00:04:19.722 EAL: Heap on socket 0 was shrunk by 6MB 00:04:19.722 EAL: Trying to obtain current memory policy. 00:04:19.722 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.722 EAL: Restoring previous memory policy: 4 00:04:19.722 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.722 EAL: request: mp_malloc_sync 00:04:19.722 EAL: No shared files mode enabled, IPC is disabled 00:04:19.722 EAL: Heap on socket 0 was expanded by 10MB 00:04:19.722 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.722 EAL: request: mp_malloc_sync 00:04:19.722 EAL: No shared files mode enabled, IPC is disabled 00:04:19.722 EAL: Heap on socket 0 was shrunk by 10MB 00:04:19.722 EAL: Trying to obtain current memory policy. 00:04:19.722 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.722 EAL: Restoring previous memory policy: 4 00:04:19.722 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.722 EAL: request: mp_malloc_sync 00:04:19.722 EAL: No shared files mode enabled, IPC is disabled 00:04:19.722 EAL: Heap on socket 0 was expanded by 18MB 00:04:19.722 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.722 EAL: request: mp_malloc_sync 00:04:19.722 EAL: No shared files mode enabled, IPC is disabled 00:04:19.722 EAL: Heap on socket 0 was shrunk by 18MB 00:04:19.722 EAL: Trying to obtain current memory policy. 
00:04:19.722 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.722 EAL: Restoring previous memory policy: 4 00:04:19.722 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.722 EAL: request: mp_malloc_sync 00:04:19.722 EAL: No shared files mode enabled, IPC is disabled 00:04:19.722 EAL: Heap on socket 0 was expanded by 34MB 00:04:19.722 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.722 EAL: request: mp_malloc_sync 00:04:19.722 EAL: No shared files mode enabled, IPC is disabled 00:04:19.722 EAL: Heap on socket 0 was shrunk by 34MB 00:04:19.981 EAL: Trying to obtain current memory policy. 00:04:19.981 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.981 EAL: Restoring previous memory policy: 4 00:04:19.981 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.981 EAL: request: mp_malloc_sync 00:04:19.981 EAL: No shared files mode enabled, IPC is disabled 00:04:19.981 EAL: Heap on socket 0 was expanded by 66MB 00:04:19.981 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.981 EAL: request: mp_malloc_sync 00:04:19.981 EAL: No shared files mode enabled, IPC is disabled 00:04:19.981 EAL: Heap on socket 0 was shrunk by 66MB 00:04:20.241 EAL: Trying to obtain current memory policy. 00:04:20.241 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.241 EAL: Restoring previous memory policy: 4 00:04:20.241 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.241 EAL: request: mp_malloc_sync 00:04:20.241 EAL: No shared files mode enabled, IPC is disabled 00:04:20.241 EAL: Heap on socket 0 was expanded by 130MB 00:04:20.502 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.502 EAL: request: mp_malloc_sync 00:04:20.502 EAL: No shared files mode enabled, IPC is disabled 00:04:20.502 EAL: Heap on socket 0 was shrunk by 130MB 00:04:20.502 EAL: Trying to obtain current memory policy. 
00:04:20.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.760 EAL: Restoring previous memory policy: 4 00:04:20.760 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.760 EAL: request: mp_malloc_sync 00:04:20.760 EAL: No shared files mode enabled, IPC is disabled 00:04:20.760 EAL: Heap on socket 0 was expanded by 258MB 00:04:21.024 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.284 EAL: request: mp_malloc_sync 00:04:21.284 EAL: No shared files mode enabled, IPC is disabled 00:04:21.284 EAL: Heap on socket 0 was shrunk by 258MB 00:04:21.544 EAL: Trying to obtain current memory policy. 00:04:21.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.805 EAL: Restoring previous memory policy: 4 00:04:21.805 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.805 EAL: request: mp_malloc_sync 00:04:21.805 EAL: No shared files mode enabled, IPC is disabled 00:04:21.805 EAL: Heap on socket 0 was expanded by 514MB 00:04:22.746 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.746 EAL: request: mp_malloc_sync 00:04:22.746 EAL: No shared files mode enabled, IPC is disabled 00:04:22.746 EAL: Heap on socket 0 was shrunk by 514MB 00:04:23.684 EAL: Trying to obtain current memory policy. 
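The expand/shrink sizes stepped through by this suite (4, 6, 10, 18, 34, 66, 130, 258, 514, then 1026 MB) follow the pattern 2 MB + 2^k MB. This is an observation about the test's allocation ladder as seen in the log, not documented SPDK behavior; the sequence can be regenerated with:

```shell
# Allocation steps observed in vtophys_spdk_malloc_test: 2 MB + 2^k MB.
sizes=""
for k in $(seq 1 10); do
    sizes="$sizes $((2 + (1 << k)))"   # 4, 6, 10, ..., 1026
done
echo "MB steps:$sizes"
```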
00:04:23.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.944 EAL: Restoring previous memory policy: 4 00:04:23.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.945 EAL: request: mp_malloc_sync 00:04:23.945 EAL: No shared files mode enabled, IPC is disabled 00:04:23.945 EAL: Heap on socket 0 was expanded by 1026MB 00:04:25.853 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.114 EAL: request: mp_malloc_sync 00:04:26.114 EAL: No shared files mode enabled, IPC is disabled 00:04:26.114 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:27.492 passed 00:04:27.492 00:04:27.492 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.492 suites 1 1 n/a 0 0 00:04:27.492 tests 2 2 2 0 0 00:04:27.492 asserts 497 497 497 0 n/a 00:04:27.492 00:04:27.492 Elapsed time = 8.257 seconds 00:04:27.492 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.492 EAL: request: mp_malloc_sync 00:04:27.492 EAL: No shared files mode enabled, IPC is disabled 00:04:27.492 EAL: Heap on socket 0 was shrunk by 2MB 00:04:27.492 EAL: No shared files mode enabled, IPC is disabled 00:04:27.492 EAL: No shared files mode enabled, IPC is disabled 00:04:27.492 EAL: No shared files mode enabled, IPC is disabled 00:04:27.493 00:04:27.493 real 0m8.531s 00:04:27.493 user 0m7.368s 00:04:27.493 sys 0m1.103s 00:04:27.493 18:11:25 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.493 18:11:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:27.493 ************************************ 00:04:27.493 END TEST env_vtophys 00:04:27.493 ************************************ 00:04:27.493 18:11:25 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:27.493 18:11:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.493 18:11:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.493 18:11:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.752 
************************************ 00:04:27.752 START TEST env_pci 00:04:27.752 ************************************ 00:04:27.752 18:11:25 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:27.752 00:04:27.752 00:04:27.752 CUnit - A unit testing framework for C - Version 2.1-3 00:04:27.752 http://cunit.sourceforge.net/ 00:04:27.752 00:04:27.752 00:04:27.752 Suite: pci 00:04:27.752 Test: pci_hook ...[2024-11-18 18:11:25.875508] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2813170 has claimed it 00:04:27.752 EAL: Cannot find device (10000:00:01.0) 00:04:27.752 EAL: Failed to attach device on primary process 00:04:27.752 passed 00:04:27.752 00:04:27.752 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.752 suites 1 1 n/a 0 0 00:04:27.752 tests 1 1 1 0 0 00:04:27.752 asserts 25 25 25 0 n/a 00:04:27.752 00:04:27.752 Elapsed time = 0.044 seconds 00:04:27.752 00:04:27.752 real 0m0.097s 00:04:27.752 user 0m0.039s 00:04:27.752 sys 0m0.057s 00:04:27.752 18:11:25 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.752 18:11:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:27.752 ************************************ 00:04:27.752 END TEST env_pci 00:04:27.752 ************************************ 00:04:27.752 18:11:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:27.752 18:11:25 env -- env/env.sh@15 -- # uname 00:04:27.752 18:11:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:27.752 18:11:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:27.752 18:11:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:27.752 18:11:25 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:27.752 18:11:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.752 18:11:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.752 ************************************ 00:04:27.752 START TEST env_dpdk_post_init 00:04:27.752 ************************************ 00:04:27.752 18:11:25 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:27.752 EAL: Detected CPU lcores: 48 00:04:27.752 EAL: Detected NUMA nodes: 2 00:04:27.752 EAL: Detected shared linkage of DPDK 00:04:27.752 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:28.012 EAL: Selected IOVA mode 'VA' 00:04:28.012 EAL: VFIO support initialized 00:04:28.013 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:28.013 EAL: Using IOMMU type 1 (Type 1) 00:04:28.013 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:28.013 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:28.013 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:28.013 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:28.013 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:28.013 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:28.013 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:28.013 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:28.013 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:28.273 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:28.273 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:28.273 EAL: Probe PCI driver: 
spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:28.273 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:28.273 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:28.273 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:28.273 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:29.214 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:32.507 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:32.507 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:32.507 Starting DPDK initialization... 00:04:32.507 Starting SPDK post initialization... 00:04:32.507 SPDK NVMe probe 00:04:32.507 Attaching to 0000:88:00.0 00:04:32.507 Attached to 0000:88:00.0 00:04:32.507 Cleaning up... 00:04:32.507 00:04:32.507 real 0m4.581s 00:04:32.507 user 0m3.136s 00:04:32.507 sys 0m0.498s 00:04:32.507 18:11:30 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.507 18:11:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:32.507 ************************************ 00:04:32.507 END TEST env_dpdk_post_init 00:04:32.507 ************************************ 00:04:32.507 18:11:30 env -- env/env.sh@26 -- # uname 00:04:32.507 18:11:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:32.507 18:11:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.507 18:11:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.507 18:11:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.507 18:11:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.507 ************************************ 00:04:32.507 START TEST env_mem_callbacks 00:04:32.507 ************************************ 00:04:32.507 18:11:30 
env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.507 EAL: Detected CPU lcores: 48 00:04:32.507 EAL: Detected NUMA nodes: 2 00:04:32.507 EAL: Detected shared linkage of DPDK 00:04:32.507 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:32.507 EAL: Selected IOVA mode 'VA' 00:04:32.507 EAL: VFIO support initialized 00:04:32.507 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.507 00:04:32.507 00:04:32.507 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.507 http://cunit.sourceforge.net/ 00:04:32.507 00:04:32.507 00:04:32.507 Suite: memory 00:04:32.507 Test: test ... 00:04:32.507 register 0x200000200000 2097152 00:04:32.507 malloc 3145728 00:04:32.507 register 0x200000400000 4194304 00:04:32.507 buf 0x2000004fffc0 len 3145728 PASSED 00:04:32.507 malloc 64 00:04:32.507 buf 0x2000004ffec0 len 64 PASSED 00:04:32.507 malloc 4194304 00:04:32.507 register 0x200000800000 6291456 00:04:32.507 buf 0x2000009fffc0 len 4194304 PASSED 00:04:32.507 free 0x2000004fffc0 3145728 00:04:32.507 free 0x2000004ffec0 64 00:04:32.508 unregister 0x200000400000 4194304 PASSED 00:04:32.508 free 0x2000009fffc0 4194304 00:04:32.508 unregister 0x200000800000 6291456 PASSED 00:04:32.508 malloc 8388608 00:04:32.508 register 0x200000400000 10485760 00:04:32.508 buf 0x2000005fffc0 len 8388608 PASSED 00:04:32.508 free 0x2000005fffc0 8388608 00:04:32.508 unregister 0x200000400000 10485760 PASSED 00:04:32.508 passed 00:04:32.508 00:04:32.508 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.508 suites 1 1 n/a 0 0 00:04:32.508 tests 1 1 1 0 0 00:04:32.508 asserts 15 15 15 0 n/a 00:04:32.508 00:04:32.508 Elapsed time = 0.060 seconds 00:04:32.508 00:04:32.508 real 0m0.186s 00:04:32.508 user 0m0.088s 00:04:32.508 sys 0m0.097s 00:04:32.508 18:11:30 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.508 18:11:30 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:32.508 ************************************ 00:04:32.508 END TEST env_mem_callbacks 00:04:32.508 ************************************ 00:04:32.508 00:04:32.508 real 0m14.035s 00:04:32.508 user 0m11.052s 00:04:32.508 sys 0m1.995s 00:04:32.508 18:11:30 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.508 18:11:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.508 ************************************ 00:04:32.508 END TEST env 00:04:32.508 ************************************ 00:04:32.767 18:11:30 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:32.767 18:11:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.767 18:11:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.767 18:11:30 -- common/autotest_common.sh@10 -- # set +x 00:04:32.767 ************************************ 00:04:32.767 START TEST rpc 00:04:32.767 ************************************ 00:04:32.767 18:11:30 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:32.767 * Looking for test storage... 
00:04:32.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:32.767 18:11:30 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:32.767 18:11:30 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:32.767 18:11:30 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:32.767 18:11:31 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:32.767 18:11:31 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.767 18:11:31 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.767 18:11:31 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.767 18:11:31 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.767 18:11:31 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.767 18:11:31 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.767 18:11:31 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.767 18:11:31 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.767 18:11:31 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.767 18:11:31 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.767 18:11:31 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.767 18:11:31 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:32.767 18:11:31 rpc -- scripts/common.sh@345 -- # : 1 00:04:32.767 18:11:31 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.767 18:11:31 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.767 18:11:31 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:32.767 18:11:31 rpc -- scripts/common.sh@353 -- # local d=1 00:04:32.767 18:11:31 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.767 18:11:31 rpc -- scripts/common.sh@355 -- # echo 1 00:04:32.767 18:11:31 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.767 18:11:31 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:32.767 18:11:31 rpc -- scripts/common.sh@353 -- # local d=2 00:04:32.767 18:11:31 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.767 18:11:31 rpc -- scripts/common.sh@355 -- # echo 2 00:04:32.767 18:11:31 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.767 18:11:31 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.767 18:11:31 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.767 18:11:31 rpc -- scripts/common.sh@368 -- # return 0 00:04:32.767 18:11:31 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.767 18:11:31 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:32.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.767 --rc genhtml_branch_coverage=1 00:04:32.767 --rc genhtml_function_coverage=1 00:04:32.767 --rc genhtml_legend=1 00:04:32.767 --rc geninfo_all_blocks=1 00:04:32.767 --rc geninfo_unexecuted_blocks=1 00:04:32.767 00:04:32.767 ' 00:04:32.767 18:11:31 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:32.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.767 --rc genhtml_branch_coverage=1 00:04:32.767 --rc genhtml_function_coverage=1 00:04:32.767 --rc genhtml_legend=1 00:04:32.767 --rc geninfo_all_blocks=1 00:04:32.767 --rc geninfo_unexecuted_blocks=1 00:04:32.767 00:04:32.767 ' 00:04:32.767 18:11:31 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:32.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:32.767 --rc genhtml_branch_coverage=1 00:04:32.767 --rc genhtml_function_coverage=1 00:04:32.767 --rc genhtml_legend=1 00:04:32.767 --rc geninfo_all_blocks=1 00:04:32.767 --rc geninfo_unexecuted_blocks=1 00:04:32.767 00:04:32.767 ' 00:04:32.767 18:11:31 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:32.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.767 --rc genhtml_branch_coverage=1 00:04:32.767 --rc genhtml_function_coverage=1 00:04:32.767 --rc genhtml_legend=1 00:04:32.767 --rc geninfo_all_blocks=1 00:04:32.767 --rc geninfo_unexecuted_blocks=1 00:04:32.767 00:04:32.767 ' 00:04:32.767 18:11:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2813964 00:04:32.767 18:11:31 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:32.767 18:11:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.767 18:11:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2813964 00:04:32.767 18:11:31 rpc -- common/autotest_common.sh@835 -- # '[' -z 2813964 ']' 00:04:32.767 18:11:31 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.767 18:11:31 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.767 18:11:31 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.767 18:11:31 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.767 18:11:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.026 [2024-11-18 18:11:31.127319] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:04:33.026 [2024-11-18 18:11:31.127473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2813964 ] 00:04:33.026 [2024-11-18 18:11:31.261241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.284 [2024-11-18 18:11:31.391687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:33.284 [2024-11-18 18:11:31.391769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2813964' to capture a snapshot of events at runtime. 00:04:33.285 [2024-11-18 18:11:31.391798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:33.285 [2024-11-18 18:11:31.391820] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:33.285 [2024-11-18 18:11:31.391850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2813964 for offline analysis/debug. 
00:04:33.285 [2024-11-18 18:11:31.393400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.225 18:11:32 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.225 18:11:32 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:34.225 18:11:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:34.225 18:11:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:34.225 18:11:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:34.225 18:11:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:34.225 18:11:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.225 18:11:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.225 18:11:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.225 ************************************ 00:04:34.225 START TEST rpc_integrity 00:04:34.225 ************************************ 00:04:34.225 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:34.225 18:11:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:34.225 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.225 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.225 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.225 18:11:32 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:34.225 18:11:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:34.225 18:11:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:34.225 18:11:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:34.225 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.225 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.225 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.225 18:11:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:34.225 18:11:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:34.226 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.226 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.226 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.226 18:11:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:34.226 { 00:04:34.226 "name": "Malloc0", 00:04:34.226 "aliases": [ 00:04:34.226 "6f5625f9-f686-4897-86af-a801f0482d2c" 00:04:34.226 ], 00:04:34.226 "product_name": "Malloc disk", 00:04:34.226 "block_size": 512, 00:04:34.226 "num_blocks": 16384, 00:04:34.226 "uuid": "6f5625f9-f686-4897-86af-a801f0482d2c", 00:04:34.226 "assigned_rate_limits": { 00:04:34.226 "rw_ios_per_sec": 0, 00:04:34.226 "rw_mbytes_per_sec": 0, 00:04:34.226 "r_mbytes_per_sec": 0, 00:04:34.226 "w_mbytes_per_sec": 0 00:04:34.226 }, 00:04:34.226 "claimed": false, 00:04:34.226 "zoned": false, 00:04:34.226 "supported_io_types": { 00:04:34.226 "read": true, 00:04:34.226 "write": true, 00:04:34.226 "unmap": true, 00:04:34.226 "flush": true, 00:04:34.226 "reset": true, 00:04:34.226 "nvme_admin": false, 00:04:34.226 "nvme_io": false, 00:04:34.226 "nvme_io_md": false, 00:04:34.226 "write_zeroes": true, 00:04:34.226 "zcopy": true, 00:04:34.226 "get_zone_info": false, 00:04:34.226 
"zone_management": false, 00:04:34.226 "zone_append": false, 00:04:34.226 "compare": false, 00:04:34.226 "compare_and_write": false, 00:04:34.226 "abort": true, 00:04:34.226 "seek_hole": false, 00:04:34.226 "seek_data": false, 00:04:34.226 "copy": true, 00:04:34.226 "nvme_iov_md": false 00:04:34.226 }, 00:04:34.226 "memory_domains": [ 00:04:34.226 { 00:04:34.226 "dma_device_id": "system", 00:04:34.226 "dma_device_type": 1 00:04:34.226 }, 00:04:34.226 { 00:04:34.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.226 "dma_device_type": 2 00:04:34.226 } 00:04:34.226 ], 00:04:34.226 "driver_specific": {} 00:04:34.226 } 00:04:34.226 ]' 00:04:34.226 18:11:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:34.226 18:11:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:34.226 18:11:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:34.226 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.226 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.226 [2024-11-18 18:11:32.454385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:34.226 [2024-11-18 18:11:32.454453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:34.226 [2024-11-18 18:11:32.454499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:04:34.226 [2024-11-18 18:11:32.454527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:34.226 [2024-11-18 18:11:32.457304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:34.226 [2024-11-18 18:11:32.457344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:34.226 Passthru0 00:04:34.226 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.226 18:11:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:34.226 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.226 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.226 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.226 18:11:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:34.226 { 00:04:34.226 "name": "Malloc0", 00:04:34.226 "aliases": [ 00:04:34.226 "6f5625f9-f686-4897-86af-a801f0482d2c" 00:04:34.226 ], 00:04:34.226 "product_name": "Malloc disk", 00:04:34.226 "block_size": 512, 00:04:34.226 "num_blocks": 16384, 00:04:34.226 "uuid": "6f5625f9-f686-4897-86af-a801f0482d2c", 00:04:34.226 "assigned_rate_limits": { 00:04:34.226 "rw_ios_per_sec": 0, 00:04:34.226 "rw_mbytes_per_sec": 0, 00:04:34.226 "r_mbytes_per_sec": 0, 00:04:34.226 "w_mbytes_per_sec": 0 00:04:34.226 }, 00:04:34.226 "claimed": true, 00:04:34.226 "claim_type": "exclusive_write", 00:04:34.226 "zoned": false, 00:04:34.226 "supported_io_types": { 00:04:34.226 "read": true, 00:04:34.226 "write": true, 00:04:34.226 "unmap": true, 00:04:34.226 "flush": true, 00:04:34.226 "reset": true, 00:04:34.226 "nvme_admin": false, 00:04:34.226 "nvme_io": false, 00:04:34.226 "nvme_io_md": false, 00:04:34.226 "write_zeroes": true, 00:04:34.226 "zcopy": true, 00:04:34.226 "get_zone_info": false, 00:04:34.226 "zone_management": false, 00:04:34.226 "zone_append": false, 00:04:34.226 "compare": false, 00:04:34.226 "compare_and_write": false, 00:04:34.226 "abort": true, 00:04:34.226 "seek_hole": false, 00:04:34.226 "seek_data": false, 00:04:34.226 "copy": true, 00:04:34.226 "nvme_iov_md": false 00:04:34.226 }, 00:04:34.226 "memory_domains": [ 00:04:34.226 { 00:04:34.226 "dma_device_id": "system", 00:04:34.226 "dma_device_type": 1 00:04:34.226 }, 00:04:34.226 { 00:04:34.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.226 "dma_device_type": 2 00:04:34.226 } 00:04:34.226 ], 00:04:34.226 "driver_specific": {} 00:04:34.226 }, 00:04:34.226 { 
00:04:34.226 "name": "Passthru0", 00:04:34.226 "aliases": [ 00:04:34.226 "fb0077e5-0c3c-556b-8bd0-1104e6f4af70" 00:04:34.226 ], 00:04:34.226 "product_name": "passthru", 00:04:34.226 "block_size": 512, 00:04:34.226 "num_blocks": 16384, 00:04:34.226 "uuid": "fb0077e5-0c3c-556b-8bd0-1104e6f4af70", 00:04:34.226 "assigned_rate_limits": { 00:04:34.226 "rw_ios_per_sec": 0, 00:04:34.226 "rw_mbytes_per_sec": 0, 00:04:34.226 "r_mbytes_per_sec": 0, 00:04:34.226 "w_mbytes_per_sec": 0 00:04:34.226 }, 00:04:34.226 "claimed": false, 00:04:34.226 "zoned": false, 00:04:34.226 "supported_io_types": { 00:04:34.226 "read": true, 00:04:34.226 "write": true, 00:04:34.226 "unmap": true, 00:04:34.226 "flush": true, 00:04:34.226 "reset": true, 00:04:34.226 "nvme_admin": false, 00:04:34.226 "nvme_io": false, 00:04:34.226 "nvme_io_md": false, 00:04:34.226 "write_zeroes": true, 00:04:34.226 "zcopy": true, 00:04:34.226 "get_zone_info": false, 00:04:34.226 "zone_management": false, 00:04:34.226 "zone_append": false, 00:04:34.226 "compare": false, 00:04:34.226 "compare_and_write": false, 00:04:34.226 "abort": true, 00:04:34.226 "seek_hole": false, 00:04:34.226 "seek_data": false, 00:04:34.226 "copy": true, 00:04:34.226 "nvme_iov_md": false 00:04:34.226 }, 00:04:34.226 "memory_domains": [ 00:04:34.226 { 00:04:34.226 "dma_device_id": "system", 00:04:34.226 "dma_device_type": 1 00:04:34.226 }, 00:04:34.226 { 00:04:34.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.226 "dma_device_type": 2 00:04:34.226 } 00:04:34.226 ], 00:04:34.226 "driver_specific": { 00:04:34.226 "passthru": { 00:04:34.226 "name": "Passthru0", 00:04:34.226 "base_bdev_name": "Malloc0" 00:04:34.226 } 00:04:34.226 } 00:04:34.226 } 00:04:34.226 ]' 00:04:34.226 18:11:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:34.226 18:11:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:34.226 18:11:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:34.226 18:11:32 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.226 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.226 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.226 18:11:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:34.226 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.226 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.226 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.226 18:11:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:34.226 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.226 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.226 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.226 18:11:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:34.226 18:11:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:34.486 18:11:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:34.486 00:04:34.486 real 0m0.262s 00:04:34.486 user 0m0.152s 00:04:34.486 sys 0m0.024s 00:04:34.486 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.486 18:11:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.486 ************************************ 00:04:34.486 END TEST rpc_integrity 00:04:34.486 ************************************ 00:04:34.486 18:11:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:34.486 18:11:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.486 18:11:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.486 18:11:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.486 ************************************ 00:04:34.486 START TEST rpc_plugins 
00:04:34.486 ************************************ 00:04:34.486 18:11:32 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:34.487 18:11:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:34.487 18:11:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.487 18:11:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.487 18:11:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.487 18:11:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:34.487 18:11:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:34.487 18:11:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.487 18:11:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.487 18:11:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.487 18:11:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:34.487 { 00:04:34.487 "name": "Malloc1", 00:04:34.487 "aliases": [ 00:04:34.487 "089d4c13-b655-43f4-8d39-a91362a34309" 00:04:34.487 ], 00:04:34.487 "product_name": "Malloc disk", 00:04:34.487 "block_size": 4096, 00:04:34.487 "num_blocks": 256, 00:04:34.487 "uuid": "089d4c13-b655-43f4-8d39-a91362a34309", 00:04:34.487 "assigned_rate_limits": { 00:04:34.487 "rw_ios_per_sec": 0, 00:04:34.487 "rw_mbytes_per_sec": 0, 00:04:34.487 "r_mbytes_per_sec": 0, 00:04:34.487 "w_mbytes_per_sec": 0 00:04:34.487 }, 00:04:34.487 "claimed": false, 00:04:34.487 "zoned": false, 00:04:34.487 "supported_io_types": { 00:04:34.487 "read": true, 00:04:34.487 "write": true, 00:04:34.487 "unmap": true, 00:04:34.487 "flush": true, 00:04:34.487 "reset": true, 00:04:34.487 "nvme_admin": false, 00:04:34.487 "nvme_io": false, 00:04:34.487 "nvme_io_md": false, 00:04:34.487 "write_zeroes": true, 00:04:34.487 "zcopy": true, 00:04:34.487 "get_zone_info": false, 00:04:34.487 "zone_management": false, 00:04:34.487 
"zone_append": false, 00:04:34.487 "compare": false, 00:04:34.487 "compare_and_write": false, 00:04:34.487 "abort": true, 00:04:34.487 "seek_hole": false, 00:04:34.487 "seek_data": false, 00:04:34.487 "copy": true, 00:04:34.487 "nvme_iov_md": false 00:04:34.487 }, 00:04:34.487 "memory_domains": [ 00:04:34.487 { 00:04:34.487 "dma_device_id": "system", 00:04:34.487 "dma_device_type": 1 00:04:34.487 }, 00:04:34.487 { 00:04:34.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.487 "dma_device_type": 2 00:04:34.487 } 00:04:34.487 ], 00:04:34.487 "driver_specific": {} 00:04:34.487 } 00:04:34.487 ]' 00:04:34.487 18:11:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:34.487 18:11:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:34.487 18:11:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:34.487 18:11:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.487 18:11:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.487 18:11:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.487 18:11:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:34.487 18:11:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.487 18:11:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.487 18:11:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.487 18:11:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:34.487 18:11:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:34.487 18:11:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:34.487 00:04:34.487 real 0m0.123s 00:04:34.487 user 0m0.082s 00:04:34.487 sys 0m0.006s 00:04:34.487 18:11:32 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.487 18:11:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.487 ************************************ 
00:04:34.487 END TEST rpc_plugins 00:04:34.487 ************************************ 00:04:34.487 18:11:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:34.487 18:11:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.487 18:11:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.487 18:11:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.487 ************************************ 00:04:34.487 START TEST rpc_trace_cmd_test 00:04:34.487 ************************************ 00:04:34.487 18:11:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:34.487 18:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:34.487 18:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:34.487 18:11:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.487 18:11:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:34.747 18:11:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.747 18:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:34.747 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2813964", 00:04:34.747 "tpoint_group_mask": "0x8", 00:04:34.747 "iscsi_conn": { 00:04:34.747 "mask": "0x2", 00:04:34.747 "tpoint_mask": "0x0" 00:04:34.747 }, 00:04:34.747 "scsi": { 00:04:34.747 "mask": "0x4", 00:04:34.747 "tpoint_mask": "0x0" 00:04:34.747 }, 00:04:34.747 "bdev": { 00:04:34.747 "mask": "0x8", 00:04:34.747 "tpoint_mask": "0xffffffffffffffff" 00:04:34.747 }, 00:04:34.747 "nvmf_rdma": { 00:04:34.747 "mask": "0x10", 00:04:34.747 "tpoint_mask": "0x0" 00:04:34.747 }, 00:04:34.747 "nvmf_tcp": { 00:04:34.747 "mask": "0x20", 00:04:34.747 "tpoint_mask": "0x0" 00:04:34.747 }, 00:04:34.747 "ftl": { 00:04:34.747 "mask": "0x40", 00:04:34.747 "tpoint_mask": "0x0" 00:04:34.747 }, 00:04:34.747 "blobfs": { 00:04:34.747 "mask": "0x80", 00:04:34.747 
"tpoint_mask": "0x0" 00:04:34.747 }, 00:04:34.747 "dsa": { 00:04:34.747 "mask": "0x200", 00:04:34.747 "tpoint_mask": "0x0" 00:04:34.747 }, 00:04:34.747 "thread": { 00:04:34.747 "mask": "0x400", 00:04:34.747 "tpoint_mask": "0x0" 00:04:34.747 }, 00:04:34.747 "nvme_pcie": { 00:04:34.747 "mask": "0x800", 00:04:34.747 "tpoint_mask": "0x0" 00:04:34.747 }, 00:04:34.747 "iaa": { 00:04:34.747 "mask": "0x1000", 00:04:34.747 "tpoint_mask": "0x0" 00:04:34.747 }, 00:04:34.747 "nvme_tcp": { 00:04:34.747 "mask": "0x2000", 00:04:34.747 "tpoint_mask": "0x0" 00:04:34.747 }, 00:04:34.747 "bdev_nvme": { 00:04:34.747 "mask": "0x4000", 00:04:34.747 "tpoint_mask": "0x0" 00:04:34.747 }, 00:04:34.747 "sock": { 00:04:34.747 "mask": "0x8000", 00:04:34.747 "tpoint_mask": "0x0" 00:04:34.747 }, 00:04:34.747 "blob": { 00:04:34.747 "mask": "0x10000", 00:04:34.747 "tpoint_mask": "0x0" 00:04:34.747 }, 00:04:34.747 "bdev_raid": { 00:04:34.747 "mask": "0x20000", 00:04:34.747 "tpoint_mask": "0x0" 00:04:34.747 }, 00:04:34.747 "scheduler": { 00:04:34.747 "mask": "0x40000", 00:04:34.747 "tpoint_mask": "0x0" 00:04:34.747 } 00:04:34.747 }' 00:04:34.747 18:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:34.747 18:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:34.747 18:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:34.747 18:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:34.747 18:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:34.747 18:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:34.747 18:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:34.747 18:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:34.747 18:11:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:34.747 18:11:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:34.747 00:04:34.747 real 0m0.196s 00:04:34.747 user 0m0.174s 00:04:34.747 sys 0m0.014s 00:04:34.747 18:11:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.747 18:11:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:34.747 ************************************ 00:04:34.747 END TEST rpc_trace_cmd_test 00:04:34.747 ************************************ 00:04:34.747 18:11:33 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:34.747 18:11:33 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:34.747 18:11:33 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:34.747 18:11:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.747 18:11:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.747 18:11:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.747 ************************************ 00:04:34.747 START TEST rpc_daemon_integrity 00:04:34.747 ************************************ 00:04:34.747 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:34.747 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:34.747 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.747 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.747 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.747 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:34.747 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:35.006 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:35.006 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:35.006 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.006 18:11:33 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:35.006 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.006 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:35.006 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:35.006 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.006 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.006 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.006 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:35.006 { 00:04:35.006 "name": "Malloc2", 00:04:35.006 "aliases": [ 00:04:35.006 "588bc45e-3ecb-4e2c-b03c-dd94feccaf4e" 00:04:35.006 ], 00:04:35.006 "product_name": "Malloc disk", 00:04:35.006 "block_size": 512, 00:04:35.006 "num_blocks": 16384, 00:04:35.006 "uuid": "588bc45e-3ecb-4e2c-b03c-dd94feccaf4e", 00:04:35.006 "assigned_rate_limits": { 00:04:35.006 "rw_ios_per_sec": 0, 00:04:35.006 "rw_mbytes_per_sec": 0, 00:04:35.006 "r_mbytes_per_sec": 0, 00:04:35.006 "w_mbytes_per_sec": 0 00:04:35.006 }, 00:04:35.006 "claimed": false, 00:04:35.006 "zoned": false, 00:04:35.006 "supported_io_types": { 00:04:35.006 "read": true, 00:04:35.006 "write": true, 00:04:35.006 "unmap": true, 00:04:35.006 "flush": true, 00:04:35.006 "reset": true, 00:04:35.006 "nvme_admin": false, 00:04:35.006 "nvme_io": false, 00:04:35.006 "nvme_io_md": false, 00:04:35.006 "write_zeroes": true, 00:04:35.006 "zcopy": true, 00:04:35.006 "get_zone_info": false, 00:04:35.006 "zone_management": false, 00:04:35.006 "zone_append": false, 00:04:35.006 "compare": false, 00:04:35.006 "compare_and_write": false, 00:04:35.006 "abort": true, 00:04:35.006 "seek_hole": false, 00:04:35.006 "seek_data": false, 00:04:35.006 "copy": true, 00:04:35.006 "nvme_iov_md": false 00:04:35.006 }, 00:04:35.006 "memory_domains": [ 00:04:35.006 { 
00:04:35.006 "dma_device_id": "system", 00:04:35.006 "dma_device_type": 1 00:04:35.006 }, 00:04:35.006 { 00:04:35.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.006 "dma_device_type": 2 00:04:35.006 } 00:04:35.006 ], 00:04:35.006 "driver_specific": {} 00:04:35.006 } 00:04:35.006 ]' 00:04:35.006 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:35.006 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:35.006 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:35.006 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.006 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.006 [2024-11-18 18:11:33.171817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:35.006 [2024-11-18 18:11:33.171873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:35.006 [2024-11-18 18:11:33.171934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023a80 00:04:35.006 [2024-11-18 18:11:33.171959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:35.006 [2024-11-18 18:11:33.174722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:35.006 [2024-11-18 18:11:33.174757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:35.006 Passthru0 00:04:35.006 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.006 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:35.007 { 00:04:35.007 "name": "Malloc2", 00:04:35.007 "aliases": [ 00:04:35.007 "588bc45e-3ecb-4e2c-b03c-dd94feccaf4e" 00:04:35.007 ], 00:04:35.007 "product_name": "Malloc disk", 00:04:35.007 "block_size": 512, 00:04:35.007 "num_blocks": 16384, 00:04:35.007 "uuid": "588bc45e-3ecb-4e2c-b03c-dd94feccaf4e", 00:04:35.007 "assigned_rate_limits": { 00:04:35.007 "rw_ios_per_sec": 0, 00:04:35.007 "rw_mbytes_per_sec": 0, 00:04:35.007 "r_mbytes_per_sec": 0, 00:04:35.007 "w_mbytes_per_sec": 0 00:04:35.007 }, 00:04:35.007 "claimed": true, 00:04:35.007 "claim_type": "exclusive_write", 00:04:35.007 "zoned": false, 00:04:35.007 "supported_io_types": { 00:04:35.007 "read": true, 00:04:35.007 "write": true, 00:04:35.007 "unmap": true, 00:04:35.007 "flush": true, 00:04:35.007 "reset": true, 00:04:35.007 "nvme_admin": false, 00:04:35.007 "nvme_io": false, 00:04:35.007 "nvme_io_md": false, 00:04:35.007 "write_zeroes": true, 00:04:35.007 "zcopy": true, 00:04:35.007 "get_zone_info": false, 00:04:35.007 "zone_management": false, 00:04:35.007 "zone_append": false, 00:04:35.007 "compare": false, 00:04:35.007 "compare_and_write": false, 00:04:35.007 "abort": true, 00:04:35.007 "seek_hole": false, 00:04:35.007 "seek_data": false, 00:04:35.007 "copy": true, 00:04:35.007 "nvme_iov_md": false 00:04:35.007 }, 00:04:35.007 "memory_domains": [ 00:04:35.007 { 00:04:35.007 "dma_device_id": "system", 00:04:35.007 "dma_device_type": 1 00:04:35.007 }, 00:04:35.007 { 00:04:35.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.007 "dma_device_type": 2 00:04:35.007 } 00:04:35.007 ], 00:04:35.007 "driver_specific": {} 00:04:35.007 }, 00:04:35.007 { 00:04:35.007 "name": "Passthru0", 00:04:35.007 "aliases": [ 00:04:35.007 "99853087-bf81-534b-8201-8cb53d6da221" 00:04:35.007 ], 00:04:35.007 "product_name": "passthru", 00:04:35.007 "block_size": 512, 00:04:35.007 "num_blocks": 16384, 00:04:35.007 "uuid": 
"99853087-bf81-534b-8201-8cb53d6da221", 00:04:35.007 "assigned_rate_limits": { 00:04:35.007 "rw_ios_per_sec": 0, 00:04:35.007 "rw_mbytes_per_sec": 0, 00:04:35.007 "r_mbytes_per_sec": 0, 00:04:35.007 "w_mbytes_per_sec": 0 00:04:35.007 }, 00:04:35.007 "claimed": false, 00:04:35.007 "zoned": false, 00:04:35.007 "supported_io_types": { 00:04:35.007 "read": true, 00:04:35.007 "write": true, 00:04:35.007 "unmap": true, 00:04:35.007 "flush": true, 00:04:35.007 "reset": true, 00:04:35.007 "nvme_admin": false, 00:04:35.007 "nvme_io": false, 00:04:35.007 "nvme_io_md": false, 00:04:35.007 "write_zeroes": true, 00:04:35.007 "zcopy": true, 00:04:35.007 "get_zone_info": false, 00:04:35.007 "zone_management": false, 00:04:35.007 "zone_append": false, 00:04:35.007 "compare": false, 00:04:35.007 "compare_and_write": false, 00:04:35.007 "abort": true, 00:04:35.007 "seek_hole": false, 00:04:35.007 "seek_data": false, 00:04:35.007 "copy": true, 00:04:35.007 "nvme_iov_md": false 00:04:35.007 }, 00:04:35.007 "memory_domains": [ 00:04:35.007 { 00:04:35.007 "dma_device_id": "system", 00:04:35.007 "dma_device_type": 1 00:04:35.007 }, 00:04:35.007 { 00:04:35.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.007 "dma_device_type": 2 00:04:35.007 } 00:04:35.007 ], 00:04:35.007 "driver_specific": { 00:04:35.007 "passthru": { 00:04:35.007 "name": "Passthru0", 00:04:35.007 "base_bdev_name": "Malloc2" 00:04:35.007 } 00:04:35.007 } 00:04:35.007 } 00:04:35.007 ]' 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:35.007 00:04:35.007 real 0m0.253s 00:04:35.007 user 0m0.151s 00:04:35.007 sys 0m0.019s 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.007 18:11:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.007 ************************************ 00:04:35.007 END TEST rpc_daemon_integrity 00:04:35.007 ************************************ 00:04:35.007 18:11:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:35.007 18:11:33 rpc -- rpc/rpc.sh@84 -- # killprocess 2813964 00:04:35.007 18:11:33 rpc -- common/autotest_common.sh@954 -- # '[' -z 2813964 ']' 00:04:35.007 18:11:33 rpc -- common/autotest_common.sh@958 -- # kill -0 2813964 00:04:35.007 18:11:33 rpc -- common/autotest_common.sh@959 -- # uname 00:04:35.007 18:11:33 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.007 18:11:33 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2813964 00:04:35.265 18:11:33 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.265 18:11:33 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.265 18:11:33 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2813964' 00:04:35.265 killing process with pid 2813964 00:04:35.265 18:11:33 rpc -- common/autotest_common.sh@973 -- # kill 2813964 00:04:35.265 18:11:33 rpc -- common/autotest_common.sh@978 -- # wait 2813964 00:04:37.857 00:04:37.857 real 0m4.888s 00:04:37.857 user 0m5.451s 00:04:37.857 sys 0m0.815s 00:04:37.857 18:11:35 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.857 18:11:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.857 ************************************ 00:04:37.857 END TEST rpc 00:04:37.857 ************************************ 00:04:37.857 18:11:35 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:37.857 18:11:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.857 18:11:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.857 18:11:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.857 ************************************ 00:04:37.857 START TEST skip_rpc 00:04:37.857 ************************************ 00:04:37.857 18:11:35 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:37.857 * Looking for test storage... 
00:04:37.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:37.857 18:11:35 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:37.857 18:11:35 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:37.857 18:11:35 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:37.857 18:11:35 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.857 18:11:35 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:37.857 18:11:35 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.857 18:11:35 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:37.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.857 --rc genhtml_branch_coverage=1 00:04:37.857 --rc genhtml_function_coverage=1 00:04:37.857 --rc genhtml_legend=1 00:04:37.857 --rc geninfo_all_blocks=1 00:04:37.857 --rc geninfo_unexecuted_blocks=1 00:04:37.857 00:04:37.857 ' 00:04:37.857 18:11:35 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:37.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.857 --rc genhtml_branch_coverage=1 00:04:37.857 --rc genhtml_function_coverage=1 00:04:37.857 --rc genhtml_legend=1 00:04:37.857 --rc geninfo_all_blocks=1 00:04:37.857 --rc geninfo_unexecuted_blocks=1 00:04:37.857 00:04:37.857 ' 00:04:37.857 18:11:35 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:37.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.857 --rc genhtml_branch_coverage=1 00:04:37.857 --rc genhtml_function_coverage=1 00:04:37.857 --rc genhtml_legend=1 00:04:37.857 --rc geninfo_all_blocks=1 00:04:37.857 --rc geninfo_unexecuted_blocks=1 00:04:37.857 00:04:37.857 ' 00:04:37.857 18:11:35 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:37.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.857 --rc genhtml_branch_coverage=1 00:04:37.857 --rc genhtml_function_coverage=1 00:04:37.857 --rc genhtml_legend=1 00:04:37.857 --rc geninfo_all_blocks=1 00:04:37.857 --rc geninfo_unexecuted_blocks=1 00:04:37.857 00:04:37.857 ' 00:04:37.857 18:11:35 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:37.857 18:11:35 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:37.857 18:11:35 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:37.857 18:11:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.857 18:11:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.857 18:11:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.857 ************************************ 00:04:37.857 START TEST skip_rpc 00:04:37.857 ************************************ 00:04:37.857 18:11:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:37.857 18:11:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2814693 00:04:37.857 18:11:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:37.857 18:11:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.857 18:11:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:37.857 [2024-11-18 18:11:36.099395] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:04:37.857 [2024-11-18 18:11:36.099561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2814693 ] 00:04:38.117 [2024-11-18 18:11:36.259048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.117 [2024-11-18 18:11:36.398448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:43.394 18:11:40 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2814693 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2814693 ']' 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2814693 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.394 18:11:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2814693 00:04:43.394 18:11:41 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.394 18:11:41 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.394 18:11:41 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2814693' 00:04:43.394 killing process with pid 2814693 00:04:43.394 18:11:41 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2814693 00:04:43.394 18:11:41 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2814693 00:04:45.298 00:04:45.298 real 0m7.453s 00:04:45.298 user 0m6.927s 00:04:45.298 sys 0m0.522s 00:04:45.298 18:11:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.298 18:11:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.298 ************************************ 00:04:45.298 END TEST skip_rpc 00:04:45.298 ************************************ 00:04:45.298 18:11:43 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:45.298 18:11:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.298 18:11:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.298 18:11:43 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.298 ************************************ 00:04:45.298 START TEST skip_rpc_with_json 00:04:45.298 ************************************ 00:04:45.298 18:11:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:45.298 18:11:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:45.298 18:11:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2815646 00:04:45.299 18:11:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:45.299 18:11:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.299 18:11:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2815646 00:04:45.299 18:11:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2815646 ']' 00:04:45.299 18:11:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.299 18:11:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.299 18:11:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.299 18:11:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.299 18:11:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.299 [2024-11-18 18:11:43.592643] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:04:45.299 [2024-11-18 18:11:43.592798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2815646 ] 00:04:45.559 [2024-11-18 18:11:43.736234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.559 [2024-11-18 18:11:43.875018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.936 18:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.936 18:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:46.936 18:11:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:46.936 18:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.936 18:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.936 [2024-11-18 18:11:44.847534] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:46.936 request: 00:04:46.936 { 00:04:46.936 "trtype": "tcp", 00:04:46.936 "method": "nvmf_get_transports", 00:04:46.936 "req_id": 1 00:04:46.936 } 00:04:46.936 Got JSON-RPC error response 00:04:46.936 response: 00:04:46.936 { 00:04:46.936 "code": -19, 00:04:46.936 "message": "No such device" 00:04:46.936 } 00:04:46.936 18:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:46.936 18:11:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:46.936 18:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.936 18:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.936 [2024-11-18 18:11:44.855714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:46.936 18:11:44 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.936 18:11:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:46.936 18:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.936 18:11:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.936 18:11:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.936 18:11:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:46.936 { 00:04:46.936 "subsystems": [ 00:04:46.936 { 00:04:46.936 "subsystem": "fsdev", 00:04:46.936 "config": [ 00:04:46.936 { 00:04:46.936 "method": "fsdev_set_opts", 00:04:46.936 "params": { 00:04:46.936 "fsdev_io_pool_size": 65535, 00:04:46.936 "fsdev_io_cache_size": 256 00:04:46.936 } 00:04:46.936 } 00:04:46.936 ] 00:04:46.936 }, 00:04:46.936 { 00:04:46.936 "subsystem": "keyring", 00:04:46.936 "config": [] 00:04:46.936 }, 00:04:46.936 { 00:04:46.936 "subsystem": "iobuf", 00:04:46.936 "config": [ 00:04:46.936 { 00:04:46.936 "method": "iobuf_set_options", 00:04:46.936 "params": { 00:04:46.936 "small_pool_count": 8192, 00:04:46.936 "large_pool_count": 1024, 00:04:46.936 "small_bufsize": 8192, 00:04:46.936 "large_bufsize": 135168, 00:04:46.936 "enable_numa": false 00:04:46.936 } 00:04:46.936 } 00:04:46.936 ] 00:04:46.936 }, 00:04:46.936 { 00:04:46.936 "subsystem": "sock", 00:04:46.936 "config": [ 00:04:46.936 { 00:04:46.936 "method": "sock_set_default_impl", 00:04:46.936 "params": { 00:04:46.936 "impl_name": "posix" 00:04:46.936 } 00:04:46.936 }, 00:04:46.936 { 00:04:46.936 "method": "sock_impl_set_options", 00:04:46.936 "params": { 00:04:46.936 "impl_name": "ssl", 00:04:46.936 "recv_buf_size": 4096, 00:04:46.936 "send_buf_size": 4096, 00:04:46.936 "enable_recv_pipe": true, 00:04:46.936 "enable_quickack": false, 00:04:46.936 
"enable_placement_id": 0, 00:04:46.936 "enable_zerocopy_send_server": true, 00:04:46.936 "enable_zerocopy_send_client": false, 00:04:46.936 "zerocopy_threshold": 0, 00:04:46.936 "tls_version": 0, 00:04:46.936 "enable_ktls": false 00:04:46.936 } 00:04:46.936 }, 00:04:46.936 { 00:04:46.936 "method": "sock_impl_set_options", 00:04:46.936 "params": { 00:04:46.936 "impl_name": "posix", 00:04:46.936 "recv_buf_size": 2097152, 00:04:46.936 "send_buf_size": 2097152, 00:04:46.936 "enable_recv_pipe": true, 00:04:46.936 "enable_quickack": false, 00:04:46.936 "enable_placement_id": 0, 00:04:46.936 "enable_zerocopy_send_server": true, 00:04:46.936 "enable_zerocopy_send_client": false, 00:04:46.936 "zerocopy_threshold": 0, 00:04:46.936 "tls_version": 0, 00:04:46.936 "enable_ktls": false 00:04:46.936 } 00:04:46.936 } 00:04:46.936 ] 00:04:46.936 }, 00:04:46.936 { 00:04:46.936 "subsystem": "vmd", 00:04:46.936 "config": [] 00:04:46.936 }, 00:04:46.936 { 00:04:46.936 "subsystem": "accel", 00:04:46.936 "config": [ 00:04:46.936 { 00:04:46.936 "method": "accel_set_options", 00:04:46.936 "params": { 00:04:46.936 "small_cache_size": 128, 00:04:46.936 "large_cache_size": 16, 00:04:46.936 "task_count": 2048, 00:04:46.936 "sequence_count": 2048, 00:04:46.936 "buf_count": 2048 00:04:46.936 } 00:04:46.936 } 00:04:46.936 ] 00:04:46.936 }, 00:04:46.936 { 00:04:46.936 "subsystem": "bdev", 00:04:46.936 "config": [ 00:04:46.936 { 00:04:46.936 "method": "bdev_set_options", 00:04:46.936 "params": { 00:04:46.936 "bdev_io_pool_size": 65535, 00:04:46.936 "bdev_io_cache_size": 256, 00:04:46.936 "bdev_auto_examine": true, 00:04:46.936 "iobuf_small_cache_size": 128, 00:04:46.936 "iobuf_large_cache_size": 16 00:04:46.936 } 00:04:46.936 }, 00:04:46.936 { 00:04:46.936 "method": "bdev_raid_set_options", 00:04:46.936 "params": { 00:04:46.936 "process_window_size_kb": 1024, 00:04:46.936 "process_max_bandwidth_mb_sec": 0 00:04:46.936 } 00:04:46.936 }, 00:04:46.936 { 00:04:46.936 "method": "bdev_iscsi_set_options", 
00:04:46.936 "params": { 00:04:46.936 "timeout_sec": 30 00:04:46.936 } 00:04:46.936 }, 00:04:46.936 { 00:04:46.936 "method": "bdev_nvme_set_options", 00:04:46.936 "params": { 00:04:46.936 "action_on_timeout": "none", 00:04:46.936 "timeout_us": 0, 00:04:46.936 "timeout_admin_us": 0, 00:04:46.936 "keep_alive_timeout_ms": 10000, 00:04:46.936 "arbitration_burst": 0, 00:04:46.936 "low_priority_weight": 0, 00:04:46.936 "medium_priority_weight": 0, 00:04:46.936 "high_priority_weight": 0, 00:04:46.936 "nvme_adminq_poll_period_us": 10000, 00:04:46.936 "nvme_ioq_poll_period_us": 0, 00:04:46.936 "io_queue_requests": 0, 00:04:46.936 "delay_cmd_submit": true, 00:04:46.936 "transport_retry_count": 4, 00:04:46.936 "bdev_retry_count": 3, 00:04:46.936 "transport_ack_timeout": 0, 00:04:46.936 "ctrlr_loss_timeout_sec": 0, 00:04:46.936 "reconnect_delay_sec": 0, 00:04:46.936 "fast_io_fail_timeout_sec": 0, 00:04:46.936 "disable_auto_failback": false, 00:04:46.936 "generate_uuids": false, 00:04:46.936 "transport_tos": 0, 00:04:46.936 "nvme_error_stat": false, 00:04:46.936 "rdma_srq_size": 0, 00:04:46.936 "io_path_stat": false, 00:04:46.936 "allow_accel_sequence": false, 00:04:46.936 "rdma_max_cq_size": 0, 00:04:46.936 "rdma_cm_event_timeout_ms": 0, 00:04:46.936 "dhchap_digests": [ 00:04:46.936 "sha256", 00:04:46.936 "sha384", 00:04:46.936 "sha512" 00:04:46.936 ], 00:04:46.936 "dhchap_dhgroups": [ 00:04:46.936 "null", 00:04:46.936 "ffdhe2048", 00:04:46.936 "ffdhe3072", 00:04:46.936 "ffdhe4096", 00:04:46.936 "ffdhe6144", 00:04:46.936 "ffdhe8192" 00:04:46.936 ] 00:04:46.936 } 00:04:46.936 }, 00:04:46.936 { 00:04:46.936 "method": "bdev_nvme_set_hotplug", 00:04:46.936 "params": { 00:04:46.936 "period_us": 100000, 00:04:46.936 "enable": false 00:04:46.936 } 00:04:46.936 }, 00:04:46.936 { 00:04:46.936 "method": "bdev_wait_for_examine" 00:04:46.936 } 00:04:46.936 ] 00:04:46.936 }, 00:04:46.936 { 00:04:46.936 "subsystem": "scsi", 00:04:46.936 "config": null 00:04:46.936 }, 00:04:46.936 { 
00:04:46.936 "subsystem": "scheduler", 00:04:46.936 "config": [ 00:04:46.936 { 00:04:46.936 "method": "framework_set_scheduler", 00:04:46.936 "params": { 00:04:46.936 "name": "static" 00:04:46.936 } 00:04:46.936 } 00:04:46.936 ] 00:04:46.936 }, 00:04:46.936 { 00:04:46.936 "subsystem": "vhost_scsi", 00:04:46.936 "config": [] 00:04:46.936 }, 00:04:46.936 { 00:04:46.936 "subsystem": "vhost_blk", 00:04:46.936 "config": [] 00:04:46.936 }, 00:04:46.936 { 00:04:46.936 "subsystem": "ublk", 00:04:46.936 "config": [] 00:04:46.936 }, 00:04:46.936 { 00:04:46.936 "subsystem": "nbd", 00:04:46.936 "config": [] 00:04:46.936 }, 00:04:46.936 { 00:04:46.937 "subsystem": "nvmf", 00:04:46.937 "config": [ 00:04:46.937 { 00:04:46.937 "method": "nvmf_set_config", 00:04:46.937 "params": { 00:04:46.937 "discovery_filter": "match_any", 00:04:46.937 "admin_cmd_passthru": { 00:04:46.937 "identify_ctrlr": false 00:04:46.937 }, 00:04:46.937 "dhchap_digests": [ 00:04:46.937 "sha256", 00:04:46.937 "sha384", 00:04:46.937 "sha512" 00:04:46.937 ], 00:04:46.937 "dhchap_dhgroups": [ 00:04:46.937 "null", 00:04:46.937 "ffdhe2048", 00:04:46.937 "ffdhe3072", 00:04:46.937 "ffdhe4096", 00:04:46.937 "ffdhe6144", 00:04:46.937 "ffdhe8192" 00:04:46.937 ] 00:04:46.937 } 00:04:46.937 }, 00:04:46.937 { 00:04:46.937 "method": "nvmf_set_max_subsystems", 00:04:46.937 "params": { 00:04:46.937 "max_subsystems": 1024 00:04:46.937 } 00:04:46.937 }, 00:04:46.937 { 00:04:46.937 "method": "nvmf_set_crdt", 00:04:46.937 "params": { 00:04:46.937 "crdt1": 0, 00:04:46.937 "crdt2": 0, 00:04:46.937 "crdt3": 0 00:04:46.937 } 00:04:46.937 }, 00:04:46.937 { 00:04:46.937 "method": "nvmf_create_transport", 00:04:46.937 "params": { 00:04:46.937 "trtype": "TCP", 00:04:46.937 "max_queue_depth": 128, 00:04:46.937 "max_io_qpairs_per_ctrlr": 127, 00:04:46.937 "in_capsule_data_size": 4096, 00:04:46.937 "max_io_size": 131072, 00:04:46.937 "io_unit_size": 131072, 00:04:46.937 "max_aq_depth": 128, 00:04:46.937 "num_shared_buffers": 511, 
00:04:46.937 "buf_cache_size": 4294967295, 00:04:46.937 "dif_insert_or_strip": false, 00:04:46.937 "zcopy": false, 00:04:46.937 "c2h_success": true, 00:04:46.937 "sock_priority": 0, 00:04:46.937 "abort_timeout_sec": 1, 00:04:46.937 "ack_timeout": 0, 00:04:46.937 "data_wr_pool_size": 0 00:04:46.937 } 00:04:46.937 } 00:04:46.937 ] 00:04:46.937 }, 00:04:46.937 { 00:04:46.937 "subsystem": "iscsi", 00:04:46.937 "config": [ 00:04:46.937 { 00:04:46.937 "method": "iscsi_set_options", 00:04:46.937 "params": { 00:04:46.937 "node_base": "iqn.2016-06.io.spdk", 00:04:46.937 "max_sessions": 128, 00:04:46.937 "max_connections_per_session": 2, 00:04:46.937 "max_queue_depth": 64, 00:04:46.937 "default_time2wait": 2, 00:04:46.937 "default_time2retain": 20, 00:04:46.937 "first_burst_length": 8192, 00:04:46.937 "immediate_data": true, 00:04:46.937 "allow_duplicated_isid": false, 00:04:46.937 "error_recovery_level": 0, 00:04:46.937 "nop_timeout": 60, 00:04:46.937 "nop_in_interval": 30, 00:04:46.937 "disable_chap": false, 00:04:46.937 "require_chap": false, 00:04:46.937 "mutual_chap": false, 00:04:46.937 "chap_group": 0, 00:04:46.937 "max_large_datain_per_connection": 64, 00:04:46.937 "max_r2t_per_connection": 4, 00:04:46.937 "pdu_pool_size": 36864, 00:04:46.937 "immediate_data_pool_size": 16384, 00:04:46.937 "data_out_pool_size": 2048 00:04:46.937 } 00:04:46.937 } 00:04:46.937 ] 00:04:46.937 } 00:04:46.937 ] 00:04:46.937 } 00:04:46.937 18:11:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:46.937 18:11:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2815646 00:04:46.937 18:11:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2815646 ']' 00:04:46.937 18:11:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2815646 00:04:46.937 18:11:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:46.937 18:11:45 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.937 18:11:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2815646 00:04:46.937 18:11:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.937 18:11:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.937 18:11:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2815646' 00:04:46.937 killing process with pid 2815646 00:04:46.937 18:11:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2815646 00:04:46.937 18:11:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2815646 00:04:49.476 18:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2816059 00:04:49.476 18:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:49.476 18:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:54.756 18:11:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2816059 00:04:54.756 18:11:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2816059 ']' 00:04:54.756 18:11:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2816059 00:04:54.756 18:11:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:54.756 18:11:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.756 18:11:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2816059 00:04:54.756 18:11:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.756 18:11:52 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.756 18:11:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2816059' 00:04:54.756 killing process with pid 2816059 00:04:54.756 18:11:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2816059 00:04:54.756 18:11:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2816059 00:04:56.664 18:11:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:56.664 18:11:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:56.664 00:04:56.664 real 0m11.427s 00:04:56.664 user 0m10.904s 00:04:56.664 sys 0m1.148s 00:04:56.664 18:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.665 18:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.665 ************************************ 00:04:56.665 END TEST skip_rpc_with_json 00:04:56.665 ************************************ 00:04:56.665 18:11:54 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:56.665 18:11:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.665 18:11:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.665 18:11:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.665 ************************************ 00:04:56.665 START TEST skip_rpc_with_delay 00:04:56.665 ************************************ 00:04:56.665 18:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:56.665 18:11:54 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 
--no-rpc-server -m 0x1 --wait-for-rpc 00:04:56.665 18:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:56.665 18:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:56.665 18:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:56.665 18:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.665 18:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:56.665 18:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.665 18:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:56.665 18:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.665 18:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:56.665 18:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:56.665 18:11:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:56.924 [2024-11-18 18:11:55.063051] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:56.924 18:11:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:56.924 18:11:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:56.924 18:11:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:56.924 18:11:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:56.924 00:04:56.924 real 0m0.149s 00:04:56.924 user 0m0.076s 00:04:56.924 sys 0m0.072s 00:04:56.924 18:11:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.924 18:11:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:56.924 ************************************ 00:04:56.924 END TEST skip_rpc_with_delay 00:04:56.924 ************************************ 00:04:56.924 18:11:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:56.924 18:11:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:56.924 18:11:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:56.924 18:11:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.924 18:11:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.924 18:11:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.924 ************************************ 00:04:56.924 START TEST exit_on_failed_rpc_init 00:04:56.924 ************************************ 00:04:56.924 18:11:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:56.924 18:11:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2817041 00:04:56.924 18:11:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.924 18:11:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2817041 
00:04:56.924 18:11:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2817041 ']' 00:04:56.924 18:11:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.924 18:11:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.924 18:11:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.924 18:11:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.924 18:11:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.924 [2024-11-18 18:11:55.257539] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:04:56.924 [2024-11-18 18:11:55.257708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817041 ] 00:04:57.182 [2024-11-18 18:11:55.397404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.442 [2024-11-18 18:11:55.534185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.380 18:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.380 18:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:58.380 18:11:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.380 18:11:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.380 
18:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:58.380 18:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.381 18:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.381 18:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.381 18:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.381 18:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.381 18:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.381 18:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.381 18:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.381 18:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:58.381 18:11:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.381 [2024-11-18 18:11:56.559776] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:04:58.381 [2024-11-18 18:11:56.559939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817186 ] 00:04:58.381 [2024-11-18 18:11:56.701827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.640 [2024-11-18 18:11:56.838662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.640 [2024-11-18 18:11:56.838813] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:58.640 [2024-11-18 18:11:56.838865] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:58.640 [2024-11-18 18:11:56.838884] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:58.900 18:11:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:58.900 18:11:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:58.900 18:11:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:58.900 18:11:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:58.900 18:11:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:58.900 18:11:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:58.900 18:11:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:58.900 18:11:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2817041 00:04:58.900 18:11:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2817041 ']' 00:04:58.900 18:11:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2817041 00:04:58.900 18:11:57 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:58.900 18:11:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.900 18:11:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2817041 00:04:58.900 18:11:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.900 18:11:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.900 18:11:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2817041' 00:04:58.900 killing process with pid 2817041 00:04:58.900 18:11:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2817041 00:04:58.900 18:11:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2817041 00:05:01.436 00:05:01.436 real 0m4.388s 00:05:01.436 user 0m4.846s 00:05:01.436 sys 0m0.740s 00:05:01.436 18:11:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.436 18:11:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:01.436 ************************************ 00:05:01.436 END TEST exit_on_failed_rpc_init 00:05:01.436 ************************************ 00:05:01.436 18:11:59 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:01.436 00:05:01.436 real 0m23.747s 00:05:01.436 user 0m22.925s 00:05:01.436 sys 0m2.659s 00:05:01.436 18:11:59 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.436 18:11:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.436 ************************************ 00:05:01.436 END TEST skip_rpc 00:05:01.436 ************************************ 00:05:01.436 18:11:59 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:01.436 18:11:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.436 18:11:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.436 18:11:59 -- common/autotest_common.sh@10 -- # set +x 00:05:01.436 ************************************ 00:05:01.436 START TEST rpc_client 00:05:01.436 ************************************ 00:05:01.436 18:11:59 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:01.436 * Looking for test storage... 00:05:01.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:01.436 18:11:59 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.436 18:11:59 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.436 18:11:59 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.436 18:11:59 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.436 18:11:59 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:01.436 18:11:59 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.436 18:11:59 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.436 --rc genhtml_branch_coverage=1 00:05:01.436 --rc genhtml_function_coverage=1 00:05:01.436 --rc genhtml_legend=1 00:05:01.436 --rc geninfo_all_blocks=1 00:05:01.436 --rc geninfo_unexecuted_blocks=1 00:05:01.436 00:05:01.436 ' 00:05:01.436 18:11:59 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.436 --rc genhtml_branch_coverage=1 
00:05:01.436 --rc genhtml_function_coverage=1 00:05:01.436 --rc genhtml_legend=1 00:05:01.436 --rc geninfo_all_blocks=1 00:05:01.436 --rc geninfo_unexecuted_blocks=1 00:05:01.436 00:05:01.436 ' 00:05:01.436 18:11:59 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.436 --rc genhtml_branch_coverage=1 00:05:01.436 --rc genhtml_function_coverage=1 00:05:01.436 --rc genhtml_legend=1 00:05:01.436 --rc geninfo_all_blocks=1 00:05:01.436 --rc geninfo_unexecuted_blocks=1 00:05:01.436 00:05:01.436 ' 00:05:01.436 18:11:59 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.436 --rc genhtml_branch_coverage=1 00:05:01.436 --rc genhtml_function_coverage=1 00:05:01.436 --rc genhtml_legend=1 00:05:01.436 --rc geninfo_all_blocks=1 00:05:01.436 --rc geninfo_unexecuted_blocks=1 00:05:01.436 00:05:01.436 ' 00:05:01.436 18:11:59 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:01.696 OK 00:05:01.696 18:11:59 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:01.696 00:05:01.696 real 0m0.187s 00:05:01.696 user 0m0.114s 00:05:01.696 sys 0m0.083s 00:05:01.696 18:11:59 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.696 18:11:59 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:01.696 ************************************ 00:05:01.696 END TEST rpc_client 00:05:01.696 ************************************ 00:05:01.696 18:11:59 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:01.696 18:11:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.696 18:11:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.696 18:11:59 -- common/autotest_common.sh@10 
-- # set +x 00:05:01.696 ************************************ 00:05:01.696 START TEST json_config 00:05:01.696 ************************************ 00:05:01.696 18:11:59 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:01.696 18:11:59 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.696 18:11:59 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.696 18:11:59 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.696 18:11:59 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.696 18:11:59 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.696 18:11:59 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.696 18:11:59 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.696 18:11:59 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.696 18:11:59 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.696 18:11:59 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.696 18:11:59 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.696 18:11:59 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.696 18:11:59 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.696 18:11:59 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.696 18:11:59 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.696 18:11:59 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:01.696 18:11:59 json_config -- scripts/common.sh@345 -- # : 1 00:05:01.696 18:11:59 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.696 18:11:59 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.696 18:11:59 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:01.696 18:11:59 json_config -- scripts/common.sh@353 -- # local d=1 00:05:01.696 18:11:59 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.696 18:11:59 json_config -- scripts/common.sh@355 -- # echo 1 00:05:01.696 18:11:59 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.696 18:11:59 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:01.696 18:11:59 json_config -- scripts/common.sh@353 -- # local d=2 00:05:01.696 18:11:59 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.696 18:11:59 json_config -- scripts/common.sh@355 -- # echo 2 00:05:01.696 18:11:59 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.696 18:11:59 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.696 18:11:59 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.696 18:11:59 json_config -- scripts/common.sh@368 -- # return 0 00:05:01.696 18:11:59 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.696 18:11:59 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.696 --rc genhtml_branch_coverage=1 00:05:01.696 --rc genhtml_function_coverage=1 00:05:01.696 --rc genhtml_legend=1 00:05:01.696 --rc geninfo_all_blocks=1 00:05:01.696 --rc geninfo_unexecuted_blocks=1 00:05:01.697 00:05:01.697 ' 00:05:01.697 18:11:59 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.697 --rc genhtml_branch_coverage=1 00:05:01.697 --rc genhtml_function_coverage=1 00:05:01.697 --rc genhtml_legend=1 00:05:01.697 --rc geninfo_all_blocks=1 00:05:01.697 --rc geninfo_unexecuted_blocks=1 00:05:01.697 00:05:01.697 ' 00:05:01.697 18:11:59 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:01.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.697 --rc genhtml_branch_coverage=1
00:05:01.697 --rc genhtml_function_coverage=1
00:05:01.697 --rc genhtml_legend=1
00:05:01.697 --rc geninfo_all_blocks=1
00:05:01.697 --rc geninfo_unexecuted_blocks=1
00:05:01.697 
00:05:01.697 '
00:05:01.697 18:11:59 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:01.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.697 --rc genhtml_branch_coverage=1
00:05:01.697 --rc genhtml_function_coverage=1
00:05:01.697 --rc genhtml_legend=1
00:05:01.697 --rc geninfo_all_blocks=1
00:05:01.697 --rc geninfo_unexecuted_blocks=1
00:05:01.697 
00:05:01.697 '
00:05:01.697 18:11:59 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:01.697 18:11:59 json_config -- nvmf/common.sh@7 -- # uname -s
00:05:01.697 18:11:59 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:01.697 18:11:59 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:01.697 18:11:59 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:01.697 18:11:59 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:01.697 18:11:59 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:01.697 18:11:59 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:01.697 18:11:59 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:01.697 18:11:59 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:01.697 18:11:59 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:01.697 18:11:59 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:01.697 18:11:59 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:05:01.697 18:11:59 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:05:01.697 18:11:59 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:01.697 18:11:59 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:01.697 18:11:59 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:01.697 18:11:59 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:01.697 18:11:59 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:01.697 18:11:59 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:05:01.697 18:11:59 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:01.697 18:11:59 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:01.697 18:11:59 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:01.697 18:11:59 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:01.697 18:11:59 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:01.697 18:11:59 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:01.697 18:11:59 json_config -- paths/export.sh@5 -- # export PATH
00:05:01.697 18:11:59 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:01.697 18:12:00 json_config -- nvmf/common.sh@51 -- # : 0
00:05:01.697 18:12:00 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:01.697 18:12:00 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:01.697 18:12:00 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:01.697 18:12:00 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:01.697 18:12:00 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:01.697 18:12:00 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:01.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:01.697 18:12:00 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:01.697 18:12:00 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:01.697 18:12:00 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:01.697 18:12:00 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:05:01.697 18:12:00 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:05:01.697 18:12:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:05:01.697 18:12:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:05:01.697 18:12:00 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:05:01.697 18:12:00 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:05:01.697 18:12:00 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:05:01.697 18:12:00 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:05:01.697 18:12:00 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:05:01.697 18:12:00 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:05:01.697 18:12:00 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:05:01.697 18:12:00 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:05:01.697 18:12:00 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:05:01.697 18:12:00 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:05:01.697 18:12:00 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:01.697 18:12:00 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
00:05:01.697 INFO: JSON configuration test init
00:05:01.697 18:12:00 json_config -- json_config/json_config.sh@364 -- # json_config_test_init
00:05:01.697 18:12:00 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init
00:05:01.697 18:12:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:01.697 18:12:00 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:01.697 18:12:00 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target
00:05:01.697 18:12:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:01.697 18:12:00 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:01.697 18:12:00 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc
00:05:01.697 18:12:00 json_config -- json_config/common.sh@9 -- # local app=target
00:05:01.697 18:12:00 json_config -- json_config/common.sh@10 -- # shift
00:05:01.697 18:12:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:01.697 18:12:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:01.697 18:12:00 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:01.697 18:12:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:01.697 18:12:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:01.697 18:12:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2817736
00:05:01.697 18:12:00 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:05:01.697 18:12:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:01.697 Waiting for target to run...
00:05:01.697 18:12:00 json_config -- json_config/common.sh@25 -- # waitforlisten 2817736 /var/tmp/spdk_tgt.sock
00:05:01.697 18:12:00 json_config -- common/autotest_common.sh@835 -- # '[' -z 2817736 ']'
00:05:01.697 18:12:00 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:01.697 18:12:00 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:01.697 18:12:00 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:01.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:01.697 18:12:00 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:01.697 18:12:00 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:01.956 [2024-11-18 18:12:00.126804] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:05:01.956 [2024-11-18 18:12:00.127003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817736 ]
00:05:02.522 [2024-11-18 18:12:00.580757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:02.522 [2024-11-18 18:12:00.702618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:02.781 18:12:01 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:02.781 18:12:01 json_config -- common/autotest_common.sh@868 -- # return 0
00:05:02.781 18:12:01 json_config -- json_config/common.sh@26 -- # echo ''
00:05:02.781 
00:05:02.781 18:12:01 json_config -- json_config/json_config.sh@276 -- # create_accel_config
00:05:02.781 18:12:01 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config
00:05:02.781 18:12:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:02.781 18:12:01 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:02.781 18:12:01 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]]
00:05:02.781 18:12:01 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config
00:05:02.781 18:12:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:02.781 18:12:01 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:02.781 18:12:01 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:05:02.781 18:12:01 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config
00:05:02.781 18:12:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:05:06.964 18:12:05 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types
00:05:06.964 18:12:05 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:05:06.964 18:12:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:06.964 18:12:05 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:06.964 18:12:05 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:05:06.964 18:12:05 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:05:06.964 18:12:05 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:05:06.964 18:12:05 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]]
00:05:06.964 18:12:05 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister")
00:05:06.964 18:12:05 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:05:06.964 18:12:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:05:06.964 18:12:05 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister')
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@51 -- # local get_types
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@53 -- # local type_diff
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n'
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@54 -- # sort
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@54 -- # uniq -u
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@54 -- # type_diff=
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]]
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types
00:05:07.223 18:12:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:07.223 18:12:05 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@62 -- # return 0
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]]
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]]
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]]
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config
00:05:07.223 18:12:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:07.223 18:12:05 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]]
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]]
00:05:07.223 18:12:05 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:07.223 18:12:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:07.481 MallocForNvmf0
00:05:07.481 18:12:05 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:07.481 18:12:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:07.740 MallocForNvmf1
00:05:07.740 18:12:05 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:05:07.740 18:12:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:05:07.998 [2024-11-18 18:12:06.154847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:07.998 18:12:06 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:07.998 18:12:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:08.255 18:12:06 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:08.255 18:12:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:08.513 18:12:06 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:08.513 18:12:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:08.801 18:12:06 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:08.801 18:12:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:09.059 [2024-11-18 18:12:07.262721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:09.059 18:12:07 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config
00:05:09.059 18:12:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:09.059 18:12:07 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:09.059 18:12:07 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:05:09.059 18:12:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:09.059 18:12:07 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:09.059 18:12:07 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:05:09.059 18:12:07 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:09.059 18:12:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:09.317 MallocBdevForConfigChangeCheck
00:05:09.317 18:12:07 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:05:09.317 18:12:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:09.317 18:12:07 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:09.317 18:12:07 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:05:09.317 18:12:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:09.884 18:12:08 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
00:05:09.884 INFO: shutting down applications...
00:05:09.884 18:12:08 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:05:09.884 18:12:08 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:05:09.884 18:12:08 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:05:09.884 18:12:08 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:05:11.781 Calling clear_iscsi_subsystem
00:05:11.781 Calling clear_nvmf_subsystem
00:05:11.781 Calling clear_nbd_subsystem
00:05:11.781 Calling clear_ublk_subsystem
00:05:11.781 Calling clear_vhost_blk_subsystem
00:05:11.781 Calling clear_vhost_scsi_subsystem
00:05:11.781 Calling clear_bdev_subsystem
00:05:11.781 18:12:09 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:05:11.781 18:12:09 json_config -- json_config/json_config.sh@350 -- # count=100
00:05:11.781 18:12:09 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:05:11.781 18:12:09 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:11.781 18:12:09 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:05:11.781 18:12:09 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:05:12.040 18:12:10 json_config -- json_config/json_config.sh@352 -- # break
00:05:12.040 18:12:10 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:05:12.040 18:12:10 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:05:12.040 18:12:10 json_config -- json_config/common.sh@31 -- # local app=target
00:05:12.040 18:12:10 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:12.040 18:12:10 json_config -- json_config/common.sh@35 -- # [[ -n 2817736 ]]
00:05:12.040 18:12:10 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2817736
00:05:12.040 18:12:10 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:12.040 18:12:10 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:12.040 18:12:10 json_config -- json_config/common.sh@41 -- # kill -0 2817736
00:05:12.040 18:12:10 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:05:12.605 18:12:10 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:05:12.605 18:12:10 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:12.605 18:12:10 json_config -- json_config/common.sh@41 -- # kill -0 2817736
00:05:12.605 18:12:10 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:05:12.864 18:12:11 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:05:12.864 18:12:11 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:12.864 18:12:11 json_config -- json_config/common.sh@41 -- # kill -0 2817736
00:05:12.864 18:12:11 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:12.864 18:12:11 json_config -- json_config/common.sh@43 -- # break
00:05:12.864 18:12:11 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:12.864 18:12:11 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:12.864 SPDK target shutdown done
00:05:12.864 18:12:11 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:05:12.864 INFO: relaunching applications...
00:05:12.864 18:12:11 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:12.864 18:12:11 json_config -- json_config/common.sh@9 -- # local app=target
00:05:12.864 18:12:11 json_config -- json_config/common.sh@10 -- # shift
00:05:12.864 18:12:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:12.864 18:12:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:12.864 18:12:11 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:12.864 18:12:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:12.864 18:12:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:12.864 18:12:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2819805
00:05:12.864 18:12:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:12.864 18:12:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:12.864 Waiting for target to run...
00:05:12.864 18:12:11 json_config -- json_config/common.sh@25 -- # waitforlisten 2819805 /var/tmp/spdk_tgt.sock
00:05:12.864 18:12:11 json_config -- common/autotest_common.sh@835 -- # '[' -z 2819805 ']'
00:05:12.864 18:12:11 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:12.864 18:12:11 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:12.864 18:12:11 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:12.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:12.864 18:12:11 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:12.864 18:12:11 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:13.122 [2024-11-18 18:12:11.261223] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:05:13.122 [2024-11-18 18:12:11.261372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819805 ]
00:05:13.688 [2024-11-18 18:12:11.877998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:13.688 [2024-11-18 18:12:12.007804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:17.871 [2024-11-18 18:12:15.794447] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:17.871 [2024-11-18 18:12:15.826982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:17.871 18:12:15 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:17.871 18:12:15 json_config -- common/autotest_common.sh@868 -- # return 0
00:05:17.871 18:12:15 json_config -- json_config/common.sh@26 -- # echo ''
00:05:17.871 
00:05:17.871 18:12:15 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:05:17.871 18:12:15 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:05:17.871 INFO: Checking if target configuration is the same...
00:05:17.871 18:12:15 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:17.871 18:12:15 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:05:17.871 18:12:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:17.871 + '[' 2 -ne 2 ']'
00:05:17.871 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:17.871 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:17.871 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:17.871 +++ basename /dev/fd/62
00:05:17.871 ++ mktemp /tmp/62.XXX
00:05:17.871 + tmp_file_1=/tmp/62.zaT
00:05:17.871 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:17.871 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:17.871 + tmp_file_2=/tmp/spdk_tgt_config.json.KfR
00:05:17.871 + ret=0
00:05:17.871 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:18.128 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:18.128 + diff -u /tmp/62.zaT /tmp/spdk_tgt_config.json.KfR
00:05:18.128 + echo 'INFO: JSON config files are the same'
00:05:18.128 INFO: JSON config files are the same
00:05:18.128 + rm /tmp/62.zaT /tmp/spdk_tgt_config.json.KfR
00:05:18.128 + exit 0
00:05:18.128 18:12:16 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:05:18.128 18:12:16 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:05:18.128 INFO: changing configuration and checking if this can be detected...
00:05:18.128 18:12:16 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:18.128 18:12:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:18.386 18:12:16 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:18.386 18:12:16 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:05:18.386 18:12:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:18.386 + '[' 2 -ne 2 ']'
00:05:18.386 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:18.386 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:18.386 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:18.386 +++ basename /dev/fd/62
00:05:18.386 ++ mktemp /tmp/62.XXX
00:05:18.386 + tmp_file_1=/tmp/62.Iov
00:05:18.386 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:18.386 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:18.386 + tmp_file_2=/tmp/spdk_tgt_config.json.2c8
00:05:18.386 + ret=0
00:05:18.386 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:19.005 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:19.005 + diff -u /tmp/62.Iov /tmp/spdk_tgt_config.json.2c8
00:05:19.005 + ret=1
00:05:19.005 + echo '=== Start of file: /tmp/62.Iov ==='
00:05:19.005 === Start of file: /tmp/62.Iov ===
00:05:19.005 + cat /tmp/62.Iov
00:05:19.005 + echo '=== End of file: /tmp/62.Iov ==='
00:05:19.005 === End of file: /tmp/62.Iov ===
00:05:19.005 + echo ''
00:05:19.005 
00:05:19.005 + echo '=== Start of file: /tmp/spdk_tgt_config.json.2c8 ==='
00:05:19.005 === Start of file: /tmp/spdk_tgt_config.json.2c8 ===
00:05:19.005 + cat /tmp/spdk_tgt_config.json.2c8
00:05:19.005 + echo '=== End of file: /tmp/spdk_tgt_config.json.2c8 ==='
00:05:19.005 === End of file: /tmp/spdk_tgt_config.json.2c8 ===
00:05:19.005 + echo ''
00:05:19.005 
00:05:19.005 + rm /tmp/62.Iov /tmp/spdk_tgt_config.json.2c8
00:05:19.005 + exit 1
00:05:19.005 18:12:17 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:05:19.005 INFO: configuration change detected.
00:05:19.005 18:12:17 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:05:19.005 18:12:17 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:05:19.005 18:12:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:19.005 18:12:17 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:19.005 18:12:17 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:05:19.005 18:12:17 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:05:19.005 18:12:17 json_config -- json_config/json_config.sh@324 -- # [[ -n 2819805 ]]
00:05:19.005 18:12:17 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:05:19.005 18:12:17 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:05:19.005 18:12:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:19.005 18:12:17 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:19.005 18:12:17 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:05:19.005 18:12:17 json_config -- json_config/json_config.sh@200 -- # uname -s
00:05:19.005 18:12:17 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:05:19.005 18:12:17 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:05:19.005 18:12:17 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:05:19.005 18:12:17 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:05:19.005 18:12:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:19.005 18:12:17 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:19.005 18:12:17 json_config -- json_config/json_config.sh@330 -- # killprocess 2819805
00:05:19.005 18:12:17 json_config -- common/autotest_common.sh@954 -- # '[' -z 2819805 ']'
00:05:19.005 18:12:17 json_config -- common/autotest_common.sh@958 -- # kill -0 2819805
00:05:19.005 18:12:17 json_config -- common/autotest_common.sh@959 -- # uname
00:05:19.005 18:12:17 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:19.005 18:12:17 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2819805
00:05:19.005 18:12:17 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:19.005 18:12:17 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:19.005 18:12:17 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2819805'
00:05:19.005 killing process with pid 2819805
00:05:19.005 18:12:17 json_config -- common/autotest_common.sh@973 -- # kill 2819805
00:05:19.005 18:12:17 json_config -- common/autotest_common.sh@978 -- # wait 2819805
00:05:21.559 18:12:19 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:21.559 18:12:19 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:05:21.559 18:12:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:21.559 18:12:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:21.559 18:12:19 json_config -- json_config/json_config.sh@335 -- # return 0
00:05:21.559 18:12:19 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:05:21.559 INFO: Success
00:05:21.559 
00:05:21.559 real 0m19.694s
00:05:21.559 user 0m21.405s
00:05:21.559 sys 0m3.133s
00:05:21.559 18:12:19 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:21.559 18:12:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:21.559 ************************************
00:05:21.559 END TEST json_config
00:05:21.559 ************************************
00:05:21.559 18:12:19 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:21.559 18:12:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:21.559 18:12:19 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:21.559 18:12:19 -- common/autotest_common.sh@10 -- # set +x
00:05:21.559 ************************************
00:05:21.559 START TEST json_config_extra_key
00:05:21.559 ************************************
00:05:21.559 18:12:19 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:21.559 18:12:19 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:21.559 18:12:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version
00:05:21.559 18:12:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:21.559 18:12:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:21.559 18:12:19 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.559 18:12:19 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.559 --rc genhtml_branch_coverage=1 00:05:21.559 --rc genhtml_function_coverage=1 00:05:21.559 --rc genhtml_legend=1 00:05:21.559 --rc geninfo_all_blocks=1 
00:05:21.559 --rc geninfo_unexecuted_blocks=1 00:05:21.559 00:05:21.559 ' 00:05:21.559 18:12:19 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.559 --rc genhtml_branch_coverage=1 00:05:21.559 --rc genhtml_function_coverage=1 00:05:21.559 --rc genhtml_legend=1 00:05:21.559 --rc geninfo_all_blocks=1 00:05:21.559 --rc geninfo_unexecuted_blocks=1 00:05:21.559 00:05:21.559 ' 00:05:21.559 18:12:19 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.559 --rc genhtml_branch_coverage=1 00:05:21.559 --rc genhtml_function_coverage=1 00:05:21.559 --rc genhtml_legend=1 00:05:21.559 --rc geninfo_all_blocks=1 00:05:21.559 --rc geninfo_unexecuted_blocks=1 00:05:21.559 00:05:21.559 ' 00:05:21.559 18:12:19 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.559 --rc genhtml_branch_coverage=1 00:05:21.559 --rc genhtml_function_coverage=1 00:05:21.559 --rc genhtml_legend=1 00:05:21.559 --rc geninfo_all_blocks=1 00:05:21.559 --rc geninfo_unexecuted_blocks=1 00:05:21.559 00:05:21.559 ' 00:05:21.559 18:12:19 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.559 18:12:19 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.559 18:12:19 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.559 18:12:19 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.559 18:12:19 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.559 18:12:19 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:21.559 18:12:19 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:21.559 18:12:19 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:21.559 18:12:19 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:21.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:21.560 18:12:19 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:21.560 18:12:19 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:21.560 18:12:19 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:21.560 18:12:19 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:21.560 18:12:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:21.560 18:12:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:21.560 18:12:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:21.560 18:12:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:21.560 18:12:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:21.560 18:12:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:21.560 18:12:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:21.560 18:12:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:21.560 18:12:19 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:21.560 18:12:19 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:21.560 INFO: launching applications... 00:05:21.560 18:12:19 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:21.560 18:12:19 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:21.560 18:12:19 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:21.560 18:12:19 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:21.560 18:12:19 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:21.560 18:12:19 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:21.560 18:12:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.560 18:12:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.560 18:12:19 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2820995 00:05:21.560 18:12:19 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:21.560 18:12:19 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:21.560 Waiting for target to run... 
00:05:21.560 18:12:19 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2820995 /var/tmp/spdk_tgt.sock 00:05:21.560 18:12:19 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2820995 ']' 00:05:21.560 18:12:19 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:21.560 18:12:19 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.560 18:12:19 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:21.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:21.560 18:12:19 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.560 18:12:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:21.560 [2024-11-18 18:12:19.843734] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:21.560 [2024-11-18 18:12:19.843908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2820995 ] 00:05:22.126 [2024-11-18 18:12:20.311290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.126 [2024-11-18 18:12:20.435603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.060 18:12:21 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.060 18:12:21 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:23.060 18:12:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:23.060 00:05:23.060 18:12:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:23.060 INFO: shutting down applications... 00:05:23.060 18:12:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:23.060 18:12:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:23.060 18:12:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:23.060 18:12:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2820995 ]] 00:05:23.060 18:12:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2820995 00:05:23.060 18:12:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:23.060 18:12:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.060 18:12:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2820995 00:05:23.060 18:12:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:23.319 18:12:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:23.319 18:12:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.319 18:12:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2820995 00:05:23.577 18:12:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:23.836 18:12:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:23.836 18:12:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.836 18:12:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2820995 00:05:23.836 18:12:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:24.402 18:12:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:24.402 18:12:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.402 18:12:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2820995 00:05:24.402 18:12:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:24.967 
18:12:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:24.967 18:12:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.967 18:12:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2820995 00:05:24.967 18:12:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:25.534 18:12:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:25.534 18:12:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.534 18:12:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2820995 00:05:25.534 18:12:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:26.102 18:12:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:26.102 18:12:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.102 18:12:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2820995 00:05:26.102 18:12:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:26.102 18:12:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:26.102 18:12:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:26.102 18:12:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:26.102 SPDK target shutdown done 00:05:26.102 18:12:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:26.102 Success 00:05:26.102 00:05:26.102 real 0m4.570s 00:05:26.102 user 0m4.212s 00:05:26.102 sys 0m0.681s 00:05:26.102 18:12:24 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.102 18:12:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:26.102 ************************************ 00:05:26.102 END TEST json_config_extra_key 00:05:26.102 ************************************ 00:05:26.102 18:12:24 -- spdk/autotest.sh@161 -- # run_test 
alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:26.102 18:12:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.102 18:12:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.102 18:12:24 -- common/autotest_common.sh@10 -- # set +x 00:05:26.102 ************************************ 00:05:26.102 START TEST alias_rpc 00:05:26.102 ************************************ 00:05:26.102 18:12:24 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:26.102 * Looking for test storage... 00:05:26.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:26.102 18:12:24 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.102 18:12:24 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.102 18:12:24 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.102 18:12:24 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.102 18:12:24 alias_rpc -- 
scripts/common.sh@344 -- # case "$op" in 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.102 18:12:24 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:26.102 18:12:24 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.102 18:12:24 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.102 --rc genhtml_branch_coverage=1 00:05:26.102 --rc genhtml_function_coverage=1 00:05:26.102 --rc genhtml_legend=1 00:05:26.102 --rc geninfo_all_blocks=1 00:05:26.102 --rc geninfo_unexecuted_blocks=1 00:05:26.102 00:05:26.102 ' 00:05:26.102 18:12:24 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.102 --rc 
genhtml_branch_coverage=1 00:05:26.102 --rc genhtml_function_coverage=1 00:05:26.102 --rc genhtml_legend=1 00:05:26.102 --rc geninfo_all_blocks=1 00:05:26.102 --rc geninfo_unexecuted_blocks=1 00:05:26.102 00:05:26.102 ' 00:05:26.102 18:12:24 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.102 --rc genhtml_branch_coverage=1 00:05:26.102 --rc genhtml_function_coverage=1 00:05:26.102 --rc genhtml_legend=1 00:05:26.102 --rc geninfo_all_blocks=1 00:05:26.102 --rc geninfo_unexecuted_blocks=1 00:05:26.102 00:05:26.102 ' 00:05:26.102 18:12:24 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.102 --rc genhtml_branch_coverage=1 00:05:26.102 --rc genhtml_function_coverage=1 00:05:26.102 --rc genhtml_legend=1 00:05:26.102 --rc geninfo_all_blocks=1 00:05:26.102 --rc geninfo_unexecuted_blocks=1 00:05:26.102 00:05:26.102 ' 00:05:26.102 18:12:24 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:26.102 18:12:24 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2821587 00:05:26.102 18:12:24 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.102 18:12:24 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2821587 00:05:26.102 18:12:24 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2821587 ']' 00:05:26.102 18:12:24 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.102 18:12:24 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.102 18:12:24 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:26.102 18:12:24 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.102 18:12:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.361 [2024-11-18 18:12:24.463288] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:26.361 [2024-11-18 18:12:24.463438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2821587 ] 00:05:26.361 [2024-11-18 18:12:24.605540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.619 [2024-11-18 18:12:24.743455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.554 18:12:25 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.554 18:12:25 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:27.554 18:12:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:27.812 18:12:25 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2821587 00:05:27.812 18:12:25 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2821587 ']' 00:05:27.812 18:12:25 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2821587 00:05:27.812 18:12:25 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:27.812 18:12:25 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.812 18:12:25 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2821587 00:05:27.812 18:12:26 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.812 18:12:26 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.812 18:12:26 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2821587' 00:05:27.812 killing process with pid 2821587 00:05:27.812 18:12:26 
alias_rpc -- common/autotest_common.sh@973 -- # kill 2821587 00:05:27.812 18:12:26 alias_rpc -- common/autotest_common.sh@978 -- # wait 2821587 00:05:30.341 00:05:30.341 real 0m4.244s 00:05:30.341 user 0m4.370s 00:05:30.341 sys 0m0.660s 00:05:30.341 18:12:28 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.341 18:12:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.341 ************************************ 00:05:30.341 END TEST alias_rpc 00:05:30.341 ************************************ 00:05:30.341 18:12:28 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:30.341 18:12:28 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:30.341 18:12:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.341 18:12:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.341 18:12:28 -- common/autotest_common.sh@10 -- # set +x 00:05:30.341 ************************************ 00:05:30.341 START TEST spdkcli_tcp 00:05:30.341 ************************************ 00:05:30.341 18:12:28 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:30.341 * Looking for test storage... 
00:05:30.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:30.341 18:12:28 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.341 18:12:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.341 18:12:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.341 18:12:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.341 18:12:28 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:30.341 18:12:28 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.341 18:12:28 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.341 --rc genhtml_branch_coverage=1 00:05:30.341 --rc genhtml_function_coverage=1 00:05:30.341 --rc genhtml_legend=1 00:05:30.341 --rc geninfo_all_blocks=1 00:05:30.341 --rc geninfo_unexecuted_blocks=1 00:05:30.341 00:05:30.341 ' 00:05:30.341 18:12:28 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.341 --rc genhtml_branch_coverage=1 00:05:30.341 --rc genhtml_function_coverage=1 00:05:30.341 --rc genhtml_legend=1 00:05:30.341 --rc geninfo_all_blocks=1 00:05:30.341 --rc geninfo_unexecuted_blocks=1 00:05:30.341 00:05:30.341 ' 00:05:30.341 18:12:28 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.341 --rc genhtml_branch_coverage=1 00:05:30.341 --rc genhtml_function_coverage=1 00:05:30.341 --rc genhtml_legend=1 00:05:30.341 --rc geninfo_all_blocks=1 00:05:30.341 --rc geninfo_unexecuted_blocks=1 00:05:30.341 00:05:30.341 ' 00:05:30.341 18:12:28 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.341 --rc genhtml_branch_coverage=1 00:05:30.341 --rc genhtml_function_coverage=1 00:05:30.341 --rc genhtml_legend=1 00:05:30.341 --rc geninfo_all_blocks=1 00:05:30.341 --rc geninfo_unexecuted_blocks=1 00:05:30.341 00:05:30.341 ' 00:05:30.341 18:12:28 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:30.341 18:12:28 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:30.341 18:12:28 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:30.341 18:12:28 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:30.341 18:12:28 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:30.341 18:12:28 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:30.341 18:12:28 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:30.341 18:12:28 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.341 18:12:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.341 18:12:28 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2822185 00:05:30.341 18:12:28 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:30.341 18:12:28 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 2822185 00:05:30.341 18:12:28 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2822185 ']' 00:05:30.341 18:12:28 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.342 18:12:28 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.342 18:12:28 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.342 18:12:28 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.342 18:12:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.600 [2024-11-18 18:12:28.781410] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:30.600 [2024-11-18 18:12:28.781592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2822185 ] 00:05:30.858 [2024-11-18 18:12:28.939064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.858 [2024-11-18 18:12:29.078833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.858 [2024-11-18 18:12:29.078835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.792 18:12:30 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.792 18:12:30 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:31.792 18:12:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2822324 00:05:31.792 18:12:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:31.792 18:12:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 
127.0.0.1 -p 9998 rpc_get_methods 00:05:32.050 [ 00:05:32.050 "bdev_malloc_delete", 00:05:32.050 "bdev_malloc_create", 00:05:32.050 "bdev_null_resize", 00:05:32.050 "bdev_null_delete", 00:05:32.050 "bdev_null_create", 00:05:32.050 "bdev_nvme_cuse_unregister", 00:05:32.050 "bdev_nvme_cuse_register", 00:05:32.050 "bdev_opal_new_user", 00:05:32.050 "bdev_opal_set_lock_state", 00:05:32.050 "bdev_opal_delete", 00:05:32.050 "bdev_opal_get_info", 00:05:32.050 "bdev_opal_create", 00:05:32.050 "bdev_nvme_opal_revert", 00:05:32.050 "bdev_nvme_opal_init", 00:05:32.050 "bdev_nvme_send_cmd", 00:05:32.050 "bdev_nvme_set_keys", 00:05:32.050 "bdev_nvme_get_path_iostat", 00:05:32.050 "bdev_nvme_get_mdns_discovery_info", 00:05:32.050 "bdev_nvme_stop_mdns_discovery", 00:05:32.050 "bdev_nvme_start_mdns_discovery", 00:05:32.050 "bdev_nvme_set_multipath_policy", 00:05:32.050 "bdev_nvme_set_preferred_path", 00:05:32.050 "bdev_nvme_get_io_paths", 00:05:32.050 "bdev_nvme_remove_error_injection", 00:05:32.050 "bdev_nvme_add_error_injection", 00:05:32.050 "bdev_nvme_get_discovery_info", 00:05:32.050 "bdev_nvme_stop_discovery", 00:05:32.050 "bdev_nvme_start_discovery", 00:05:32.050 "bdev_nvme_get_controller_health_info", 00:05:32.050 "bdev_nvme_disable_controller", 00:05:32.050 "bdev_nvme_enable_controller", 00:05:32.050 "bdev_nvme_reset_controller", 00:05:32.050 "bdev_nvme_get_transport_statistics", 00:05:32.050 "bdev_nvme_apply_firmware", 00:05:32.050 "bdev_nvme_detach_controller", 00:05:32.050 "bdev_nvme_get_controllers", 00:05:32.050 "bdev_nvme_attach_controller", 00:05:32.050 "bdev_nvme_set_hotplug", 00:05:32.050 "bdev_nvme_set_options", 00:05:32.050 "bdev_passthru_delete", 00:05:32.050 "bdev_passthru_create", 00:05:32.050 "bdev_lvol_set_parent_bdev", 00:05:32.050 "bdev_lvol_set_parent", 00:05:32.050 "bdev_lvol_check_shallow_copy", 00:05:32.050 "bdev_lvol_start_shallow_copy", 00:05:32.050 "bdev_lvol_grow_lvstore", 00:05:32.050 "bdev_lvol_get_lvols", 00:05:32.050 "bdev_lvol_get_lvstores", 
00:05:32.050 "bdev_lvol_delete", 00:05:32.050 "bdev_lvol_set_read_only", 00:05:32.050 "bdev_lvol_resize", 00:05:32.050 "bdev_lvol_decouple_parent", 00:05:32.050 "bdev_lvol_inflate", 00:05:32.050 "bdev_lvol_rename", 00:05:32.050 "bdev_lvol_clone_bdev", 00:05:32.050 "bdev_lvol_clone", 00:05:32.050 "bdev_lvol_snapshot", 00:05:32.050 "bdev_lvol_create", 00:05:32.050 "bdev_lvol_delete_lvstore", 00:05:32.050 "bdev_lvol_rename_lvstore", 00:05:32.050 "bdev_lvol_create_lvstore", 00:05:32.050 "bdev_raid_set_options", 00:05:32.050 "bdev_raid_remove_base_bdev", 00:05:32.050 "bdev_raid_add_base_bdev", 00:05:32.050 "bdev_raid_delete", 00:05:32.050 "bdev_raid_create", 00:05:32.050 "bdev_raid_get_bdevs", 00:05:32.050 "bdev_error_inject_error", 00:05:32.050 "bdev_error_delete", 00:05:32.050 "bdev_error_create", 00:05:32.050 "bdev_split_delete", 00:05:32.050 "bdev_split_create", 00:05:32.050 "bdev_delay_delete", 00:05:32.050 "bdev_delay_create", 00:05:32.050 "bdev_delay_update_latency", 00:05:32.050 "bdev_zone_block_delete", 00:05:32.050 "bdev_zone_block_create", 00:05:32.050 "blobfs_create", 00:05:32.050 "blobfs_detect", 00:05:32.050 "blobfs_set_cache_size", 00:05:32.050 "bdev_aio_delete", 00:05:32.050 "bdev_aio_rescan", 00:05:32.050 "bdev_aio_create", 00:05:32.050 "bdev_ftl_set_property", 00:05:32.050 "bdev_ftl_get_properties", 00:05:32.050 "bdev_ftl_get_stats", 00:05:32.050 "bdev_ftl_unmap", 00:05:32.050 "bdev_ftl_unload", 00:05:32.050 "bdev_ftl_delete", 00:05:32.050 "bdev_ftl_load", 00:05:32.050 "bdev_ftl_create", 00:05:32.050 "bdev_virtio_attach_controller", 00:05:32.050 "bdev_virtio_scsi_get_devices", 00:05:32.050 "bdev_virtio_detach_controller", 00:05:32.050 "bdev_virtio_blk_set_hotplug", 00:05:32.050 "bdev_iscsi_delete", 00:05:32.050 "bdev_iscsi_create", 00:05:32.050 "bdev_iscsi_set_options", 00:05:32.050 "accel_error_inject_error", 00:05:32.050 "ioat_scan_accel_module", 00:05:32.050 "dsa_scan_accel_module", 00:05:32.050 "iaa_scan_accel_module", 00:05:32.050 
"keyring_file_remove_key", 00:05:32.050 "keyring_file_add_key", 00:05:32.050 "keyring_linux_set_options", 00:05:32.050 "fsdev_aio_delete", 00:05:32.050 "fsdev_aio_create", 00:05:32.050 "iscsi_get_histogram", 00:05:32.050 "iscsi_enable_histogram", 00:05:32.050 "iscsi_set_options", 00:05:32.050 "iscsi_get_auth_groups", 00:05:32.050 "iscsi_auth_group_remove_secret", 00:05:32.050 "iscsi_auth_group_add_secret", 00:05:32.050 "iscsi_delete_auth_group", 00:05:32.050 "iscsi_create_auth_group", 00:05:32.050 "iscsi_set_discovery_auth", 00:05:32.050 "iscsi_get_options", 00:05:32.051 "iscsi_target_node_request_logout", 00:05:32.051 "iscsi_target_node_set_redirect", 00:05:32.051 "iscsi_target_node_set_auth", 00:05:32.051 "iscsi_target_node_add_lun", 00:05:32.051 "iscsi_get_stats", 00:05:32.051 "iscsi_get_connections", 00:05:32.051 "iscsi_portal_group_set_auth", 00:05:32.051 "iscsi_start_portal_group", 00:05:32.051 "iscsi_delete_portal_group", 00:05:32.051 "iscsi_create_portal_group", 00:05:32.051 "iscsi_get_portal_groups", 00:05:32.051 "iscsi_delete_target_node", 00:05:32.051 "iscsi_target_node_remove_pg_ig_maps", 00:05:32.051 "iscsi_target_node_add_pg_ig_maps", 00:05:32.051 "iscsi_create_target_node", 00:05:32.051 "iscsi_get_target_nodes", 00:05:32.051 "iscsi_delete_initiator_group", 00:05:32.051 "iscsi_initiator_group_remove_initiators", 00:05:32.051 "iscsi_initiator_group_add_initiators", 00:05:32.051 "iscsi_create_initiator_group", 00:05:32.051 "iscsi_get_initiator_groups", 00:05:32.051 "nvmf_set_crdt", 00:05:32.051 "nvmf_set_config", 00:05:32.051 "nvmf_set_max_subsystems", 00:05:32.051 "nvmf_stop_mdns_prr", 00:05:32.051 "nvmf_publish_mdns_prr", 00:05:32.051 "nvmf_subsystem_get_listeners", 00:05:32.051 "nvmf_subsystem_get_qpairs", 00:05:32.051 "nvmf_subsystem_get_controllers", 00:05:32.051 "nvmf_get_stats", 00:05:32.051 "nvmf_get_transports", 00:05:32.051 "nvmf_create_transport", 00:05:32.051 "nvmf_get_targets", 00:05:32.051 "nvmf_delete_target", 00:05:32.051 
"nvmf_create_target", 00:05:32.051 "nvmf_subsystem_allow_any_host", 00:05:32.051 "nvmf_subsystem_set_keys", 00:05:32.051 "nvmf_subsystem_remove_host", 00:05:32.051 "nvmf_subsystem_add_host", 00:05:32.051 "nvmf_ns_remove_host", 00:05:32.051 "nvmf_ns_add_host", 00:05:32.051 "nvmf_subsystem_remove_ns", 00:05:32.051 "nvmf_subsystem_set_ns_ana_group", 00:05:32.051 "nvmf_subsystem_add_ns", 00:05:32.051 "nvmf_subsystem_listener_set_ana_state", 00:05:32.051 "nvmf_discovery_get_referrals", 00:05:32.051 "nvmf_discovery_remove_referral", 00:05:32.051 "nvmf_discovery_add_referral", 00:05:32.051 "nvmf_subsystem_remove_listener", 00:05:32.051 "nvmf_subsystem_add_listener", 00:05:32.051 "nvmf_delete_subsystem", 00:05:32.051 "nvmf_create_subsystem", 00:05:32.051 "nvmf_get_subsystems", 00:05:32.051 "env_dpdk_get_mem_stats", 00:05:32.051 "nbd_get_disks", 00:05:32.051 "nbd_stop_disk", 00:05:32.051 "nbd_start_disk", 00:05:32.051 "ublk_recover_disk", 00:05:32.051 "ublk_get_disks", 00:05:32.051 "ublk_stop_disk", 00:05:32.051 "ublk_start_disk", 00:05:32.051 "ublk_destroy_target", 00:05:32.051 "ublk_create_target", 00:05:32.051 "virtio_blk_create_transport", 00:05:32.051 "virtio_blk_get_transports", 00:05:32.051 "vhost_controller_set_coalescing", 00:05:32.051 "vhost_get_controllers", 00:05:32.051 "vhost_delete_controller", 00:05:32.051 "vhost_create_blk_controller", 00:05:32.051 "vhost_scsi_controller_remove_target", 00:05:32.051 "vhost_scsi_controller_add_target", 00:05:32.051 "vhost_start_scsi_controller", 00:05:32.051 "vhost_create_scsi_controller", 00:05:32.051 "thread_set_cpumask", 00:05:32.051 "scheduler_set_options", 00:05:32.051 "framework_get_governor", 00:05:32.051 "framework_get_scheduler", 00:05:32.051 "framework_set_scheduler", 00:05:32.051 "framework_get_reactors", 00:05:32.051 "thread_get_io_channels", 00:05:32.051 "thread_get_pollers", 00:05:32.051 "thread_get_stats", 00:05:32.051 "framework_monitor_context_switch", 00:05:32.051 "spdk_kill_instance", 00:05:32.051 
"log_enable_timestamps", 00:05:32.051 "log_get_flags", 00:05:32.051 "log_clear_flag", 00:05:32.051 "log_set_flag", 00:05:32.051 "log_get_level", 00:05:32.051 "log_set_level", 00:05:32.051 "log_get_print_level", 00:05:32.051 "log_set_print_level", 00:05:32.051 "framework_enable_cpumask_locks", 00:05:32.051 "framework_disable_cpumask_locks", 00:05:32.051 "framework_wait_init", 00:05:32.051 "framework_start_init", 00:05:32.051 "scsi_get_devices", 00:05:32.051 "bdev_get_histogram", 00:05:32.051 "bdev_enable_histogram", 00:05:32.051 "bdev_set_qos_limit", 00:05:32.051 "bdev_set_qd_sampling_period", 00:05:32.051 "bdev_get_bdevs", 00:05:32.051 "bdev_reset_iostat", 00:05:32.051 "bdev_get_iostat", 00:05:32.051 "bdev_examine", 00:05:32.051 "bdev_wait_for_examine", 00:05:32.051 "bdev_set_options", 00:05:32.051 "accel_get_stats", 00:05:32.051 "accel_set_options", 00:05:32.051 "accel_set_driver", 00:05:32.051 "accel_crypto_key_destroy", 00:05:32.051 "accel_crypto_keys_get", 00:05:32.051 "accel_crypto_key_create", 00:05:32.051 "accel_assign_opc", 00:05:32.051 "accel_get_module_info", 00:05:32.051 "accel_get_opc_assignments", 00:05:32.051 "vmd_rescan", 00:05:32.051 "vmd_remove_device", 00:05:32.051 "vmd_enable", 00:05:32.051 "sock_get_default_impl", 00:05:32.051 "sock_set_default_impl", 00:05:32.051 "sock_impl_set_options", 00:05:32.051 "sock_impl_get_options", 00:05:32.051 "iobuf_get_stats", 00:05:32.051 "iobuf_set_options", 00:05:32.051 "keyring_get_keys", 00:05:32.051 "framework_get_pci_devices", 00:05:32.051 "framework_get_config", 00:05:32.051 "framework_get_subsystems", 00:05:32.051 "fsdev_set_opts", 00:05:32.051 "fsdev_get_opts", 00:05:32.051 "trace_get_info", 00:05:32.051 "trace_get_tpoint_group_mask", 00:05:32.051 "trace_disable_tpoint_group", 00:05:32.051 "trace_enable_tpoint_group", 00:05:32.051 "trace_clear_tpoint_mask", 00:05:32.051 "trace_set_tpoint_mask", 00:05:32.051 "notify_get_notifications", 00:05:32.051 "notify_get_types", 00:05:32.051 "spdk_get_version", 
00:05:32.051 "rpc_get_methods" 00:05:32.051 ] 00:05:32.051 18:12:30 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:32.051 18:12:30 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:32.051 18:12:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:32.051 18:12:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:32.051 18:12:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2822185 00:05:32.051 18:12:30 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2822185 ']' 00:05:32.051 18:12:30 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2822185 00:05:32.051 18:12:30 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:32.051 18:12:30 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.051 18:12:30 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2822185 00:05:32.051 18:12:30 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.051 18:12:30 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.051 18:12:30 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2822185' 00:05:32.051 killing process with pid 2822185 00:05:32.051 18:12:30 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2822185 00:05:32.051 18:12:30 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2822185 00:05:34.580 00:05:34.580 real 0m4.235s 00:05:34.580 user 0m7.706s 00:05:34.580 sys 0m0.706s 00:05:34.580 18:12:32 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.580 18:12:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.580 ************************************ 00:05:34.580 END TEST spdkcli_tcp 00:05:34.580 ************************************ 00:05:34.580 18:12:32 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:34.580 18:12:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.580 18:12:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.580 18:12:32 -- common/autotest_common.sh@10 -- # set +x 00:05:34.580 ************************************ 00:05:34.580 START TEST dpdk_mem_utility 00:05:34.580 ************************************ 00:05:34.580 18:12:32 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:34.580 * Looking for test storage... 00:05:34.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:34.580 18:12:32 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.580 18:12:32 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.580 18:12:32 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.838 18:12:32 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.838 18:12:32 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.838 18:12:32 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.838 18:12:32 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.838 18:12:32 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.838 18:12:32 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.838 18:12:32 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.838 18:12:32 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.838 18:12:32 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.838 18:12:32 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.838 18:12:32 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.838 
18:12:32 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.838 18:12:32 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:34.838 18:12:32 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:34.838 18:12:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.838 18:12:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.838 18:12:32 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:34.838 18:12:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:34.839 18:12:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.839 18:12:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:34.839 18:12:32 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.839 18:12:32 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:34.839 18:12:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:34.839 18:12:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.839 18:12:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:34.839 18:12:32 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.839 18:12:32 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.839 18:12:32 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.839 18:12:32 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:34.839 18:12:32 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.839 18:12:32 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.839 --rc genhtml_branch_coverage=1 00:05:34.839 --rc genhtml_function_coverage=1 00:05:34.839 --rc genhtml_legend=1 00:05:34.839 --rc geninfo_all_blocks=1 00:05:34.839 --rc 
geninfo_unexecuted_blocks=1 00:05:34.839 00:05:34.839 ' 00:05:34.839 18:12:32 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.839 --rc genhtml_branch_coverage=1 00:05:34.839 --rc genhtml_function_coverage=1 00:05:34.839 --rc genhtml_legend=1 00:05:34.839 --rc geninfo_all_blocks=1 00:05:34.839 --rc geninfo_unexecuted_blocks=1 00:05:34.839 00:05:34.839 ' 00:05:34.839 18:12:32 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.839 --rc genhtml_branch_coverage=1 00:05:34.839 --rc genhtml_function_coverage=1 00:05:34.839 --rc genhtml_legend=1 00:05:34.839 --rc geninfo_all_blocks=1 00:05:34.839 --rc geninfo_unexecuted_blocks=1 00:05:34.839 00:05:34.839 ' 00:05:34.839 18:12:32 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.839 --rc genhtml_branch_coverage=1 00:05:34.839 --rc genhtml_function_coverage=1 00:05:34.839 --rc genhtml_legend=1 00:05:34.839 --rc geninfo_all_blocks=1 00:05:34.839 --rc geninfo_unexecuted_blocks=1 00:05:34.839 00:05:34.839 ' 00:05:34.839 18:12:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:34.839 18:12:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2822791 00:05:34.839 18:12:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.839 18:12:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2822791 00:05:34.839 18:12:32 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2822791 ']' 00:05:34.839 18:12:32 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:34.839 18:12:32 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.839 18:12:32 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.839 18:12:32 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.839 18:12:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:34.839 [2024-11-18 18:12:33.028195] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:34.839 [2024-11-18 18:12:33.028342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2822791 ] 00:05:34.839 [2024-11-18 18:12:33.160316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.097 [2024-11-18 18:12:33.293251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.031 18:12:34 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.031 18:12:34 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:36.031 18:12:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:36.031 18:12:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:36.031 18:12:34 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.031 18:12:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:36.031 { 00:05:36.031 "filename": "/tmp/spdk_mem_dump.txt" 00:05:36.031 } 00:05:36.031 18:12:34 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.031 
18:12:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:36.031 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:36.031 1 heaps totaling size 816.000000 MiB 00:05:36.031 size: 816.000000 MiB heap id: 0 00:05:36.031 end heaps---------- 00:05:36.031 9 mempools totaling size 595.772034 MiB 00:05:36.031 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:36.031 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:36.031 size: 92.545471 MiB name: bdev_io_2822791 00:05:36.031 size: 50.003479 MiB name: msgpool_2822791 00:05:36.031 size: 36.509338 MiB name: fsdev_io_2822791 00:05:36.031 size: 21.763794 MiB name: PDU_Pool 00:05:36.031 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:36.031 size: 4.133484 MiB name: evtpool_2822791 00:05:36.031 size: 0.026123 MiB name: Session_Pool 00:05:36.031 end mempools------- 00:05:36.031 6 memzones totaling size 4.142822 MiB 00:05:36.031 size: 1.000366 MiB name: RG_ring_0_2822791 00:05:36.031 size: 1.000366 MiB name: RG_ring_1_2822791 00:05:36.031 size: 1.000366 MiB name: RG_ring_4_2822791 00:05:36.031 size: 1.000366 MiB name: RG_ring_5_2822791 00:05:36.031 size: 0.125366 MiB name: RG_ring_2_2822791 00:05:36.031 size: 0.015991 MiB name: RG_ring_3_2822791 00:05:36.031 end memzones------- 00:05:36.031 18:12:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:36.031 heap id: 0 total size: 816.000000 MiB number of busy elements: 44 number of free elements: 19 00:05:36.031 list of free elements. 
size: 16.857605 MiB 00:05:36.031 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:36.031 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:36.031 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:36.031 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:36.031 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:36.031 element at address: 0x200019200000 with size: 0.999329 MiB 00:05:36.031 element at address: 0x200000400000 with size: 0.998108 MiB 00:05:36.031 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:36.031 element at address: 0x200018a00000 with size: 0.959900 MiB 00:05:36.031 element at address: 0x200019500040 with size: 0.937256 MiB 00:05:36.031 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:36.032 element at address: 0x20001ac00000 with size: 0.583191 MiB 00:05:36.032 element at address: 0x200000c00000 with size: 0.495300 MiB 00:05:36.032 element at address: 0x200018e00000 with size: 0.491150 MiB 00:05:36.032 element at address: 0x200019600000 with size: 0.485657 MiB 00:05:36.032 element at address: 0x200012c00000 with size: 0.446167 MiB 00:05:36.032 element at address: 0x200028000000 with size: 0.411072 MiB 00:05:36.032 element at address: 0x200000800000 with size: 0.355286 MiB 00:05:36.032 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:05:36.032 list of standard malloc elements. 
size: 199.221497 MiB
00:05:36.032 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:05:36.032 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:05:36.032 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:05:36.032 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:05:36.032 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:05:36.032 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:05:36.032 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:05:36.032 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:05:36.032 element at address: 0x200012bff040 with size: 0.000427 MiB
00:05:36.032 element at address: 0x200012bffa00 with size: 0.000366 MiB
00:05:36.032 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:05:36.032 element at address: 0x2000003d9d80 with size: 0.000244 MiB
00:05:36.032 element at address: 0x2000004ff840 with size: 0.000244 MiB
00:05:36.032 element at address: 0x2000004ff940 with size: 0.000244 MiB
00:05:36.032 element at address: 0x2000004ffa40 with size: 0.000244 MiB
00:05:36.032 element at address: 0x2000004ffcc0 with size: 0.000244 MiB
00:05:36.032 element at address: 0x2000004ffdc0 with size: 0.000244 MiB
00:05:36.032 element at address: 0x20000087f3c0 with size: 0.000244 MiB
00:05:36.032 element at address: 0x20000087f4c0 with size: 0.000244 MiB
00:05:36.032 element at address: 0x2000008ff800 with size: 0.000244 MiB
00:05:36.032 element at address: 0x2000008ffa80 with size: 0.000244 MiB
00:05:36.032 element at address: 0x200000cfef00 with size: 0.000244 MiB
00:05:36.032 element at address: 0x200000cff000 with size: 0.000244 MiB
00:05:36.032 element at address: 0x20000a5ff480 with size: 0.000244 MiB
00:05:36.032 element at address: 0x20000a5ff580 with size: 0.000244 MiB
00:05:36.032 element at address: 0x20000a5ff680 with size: 0.000244 MiB
00:05:36.032 element at address: 0x20000a5ff780 with size: 0.000244 MiB
00:05:36.032 element at address: 0x20000a5ff880 with size: 0.000244 MiB
00:05:36.032 element at address: 0x20000a5ff980 with size: 0.000244 MiB
00:05:36.032 element at address: 0x20000a5ffc00 with size: 0.000244 MiB
00:05:36.032 element at address: 0x20000a5ffd00 with size: 0.000244 MiB
00:05:36.032 element at address: 0x20000a5ffe00 with size: 0.000244 MiB
00:05:36.032 element at address: 0x20000a5fff00 with size: 0.000244 MiB
00:05:36.032 element at address: 0x200012bff200 with size: 0.000244 MiB
00:05:36.032 element at address: 0x200012bff300 with size: 0.000244 MiB
00:05:36.032 element at address: 0x200012bff400 with size: 0.000244 MiB
00:05:36.032 element at address: 0x200012bff500 with size: 0.000244 MiB
00:05:36.032 element at address: 0x200012bff600 with size: 0.000244 MiB
00:05:36.032 element at address: 0x200012bff700 with size: 0.000244 MiB
00:05:36.032 element at address: 0x200012bff800 with size: 0.000244 MiB
00:05:36.032 element at address: 0x200012bff900 with size: 0.000244 MiB
00:05:36.032 element at address: 0x200012bffb80 with size: 0.000244 MiB
00:05:36.032 element at address: 0x200012bffc80 with size: 0.000244 MiB
00:05:36.032 element at address: 0x200012bfff00 with size: 0.000244 MiB
00:05:36.032 list of memzone associated elements. size: 599.920898 MiB
00:05:36.032 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:05:36.032 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:36.032 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:05:36.032 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:36.032 element at address: 0x200012df4740 with size: 92.045105 MiB
00:05:36.032 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2822791_0
00:05:36.032 element at address: 0x200000dff340 with size: 48.003113 MiB
00:05:36.032 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2822791_0
00:05:36.032 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:05:36.032 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2822791_0
00:05:36.032 element at address: 0x2000197be900 with size: 20.255615 MiB
00:05:36.032 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:36.032 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:05:36.032 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:36.032 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:05:36.032 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2822791_0
00:05:36.032 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:05:36.032 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2822791
00:05:36.032 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:36.032 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2822791
00:05:36.032 element at address: 0x200018efde00 with size: 1.008179 MiB
00:05:36.032 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:36.032 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:05:36.032 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:36.032 element at address: 0x200018afde00 with size: 1.008179 MiB
00:05:36.032 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:36.032 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:05:36.032 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:36.032 element at address: 0x200000cff100 with size: 1.000549 MiB
00:05:36.032 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2822791
00:05:36.032 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:05:36.032 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2822791
00:05:36.032 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:05:36.032 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2822791
00:05:36.032 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:05:36.032 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2822791
00:05:36.032 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:05:36.032 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2822791
00:05:36.032 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:05:36.032 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2822791
00:05:36.032 element at address: 0x200018e7dbc0 with size: 0.500549 MiB
00:05:36.032 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:36.032 element at address: 0x200012c72380 with size: 0.500549 MiB
00:05:36.032 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:36.032 element at address: 0x20001967c540 with size: 0.250549 MiB
00:05:36.032 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:36.032 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:05:36.032 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2822791
00:05:36.032 element at address: 0x20000085f180 with size: 0.125549 MiB
00:05:36.032 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2822791
00:05:36.032 element at address: 0x200018af5bc0 with size: 0.031799 MiB
00:05:36.032 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:36.032 element at address: 0x2000280693c0 with size: 0.023804 MiB
00:05:36.032 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:36.032 element at address: 0x20000085af40 with size: 0.016174 MiB
00:05:36.032 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2822791
00:05:36.032 element at address: 0x20002806f540 with size: 0.002502 MiB
00:05:36.032 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:36.032 element at address: 0x2000004ffb40 with size: 0.000366 MiB
00:05:36.032 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2822791
00:05:36.032 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:05:36.032 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2822791
00:05:36.032 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:05:36.032 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2822791
00:05:36.032 element at address: 0x20000a5ffa80 with size: 0.000366 MiB
00:05:36.032 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:36.032 18:12:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:36.032 18:12:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2822791
00:05:36.032 18:12:34 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2822791 ']'
00:05:36.032 18:12:34 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2822791
00:05:36.032 18:12:34 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:36.032 18:12:34 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:36.032 18:12:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2822791
00:05:36.290 18:12:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:36.290 18:12:34
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:36.290 18:12:34 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2822791'
00:05:36.290 killing process with pid 2822791
00:05:36.290 18:12:34 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2822791
00:05:36.290 18:12:34 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2822791
00:05:38.819
00:05:38.819 real 0m3.997s
00:05:38.819 user 0m4.038s
00:05:38.819 sys 0m0.643s
00:05:38.819 18:12:36 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:38.819 18:12:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:38.819 ************************************
00:05:38.819 END TEST dpdk_mem_utility
00:05:38.819 ************************************
00:05:38.819 18:12:36 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:38.819 18:12:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:38.819 18:12:36 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:38.819 18:12:36 -- common/autotest_common.sh@10 -- # set +x
00:05:38.819 ************************************
00:05:38.819 START TEST event
00:05:38.819 ************************************
00:05:38.819 18:12:36 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:38.819 * Looking for test storage...
00:05:38.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:38.819 18:12:36 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:38.819 18:12:36 event -- common/autotest_common.sh@1693 -- # lcov --version
00:05:38.819 18:12:36 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:38.819 18:12:36 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:38.819 18:12:36 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:38.819 18:12:36 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:38.819 18:12:36 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:38.819 18:12:36 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:38.819 18:12:36 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:38.819 18:12:36 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:38.819 18:12:36 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:38.819 18:12:36 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:38.819 18:12:36 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:38.819 18:12:36 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:38.819 18:12:36 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:38.819 18:12:36 event -- scripts/common.sh@344 -- # case "$op" in
00:05:38.819 18:12:36 event -- scripts/common.sh@345 -- # : 1
00:05:38.819 18:12:36 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:38.819 18:12:36 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:38.819 18:12:36 event -- scripts/common.sh@365 -- # decimal 1
00:05:38.819 18:12:36 event -- scripts/common.sh@353 -- # local d=1
00:05:38.819 18:12:36 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:38.819 18:12:36 event -- scripts/common.sh@355 -- # echo 1
00:05:38.819 18:12:36 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:38.819 18:12:36 event -- scripts/common.sh@366 -- # decimal 2
00:05:38.819 18:12:36 event -- scripts/common.sh@353 -- # local d=2
00:05:38.819 18:12:36 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:38.819 18:12:36 event -- scripts/common.sh@355 -- # echo 2
00:05:38.819 18:12:36 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:38.819 18:12:36 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:38.819 18:12:36 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:38.819 18:12:36 event -- scripts/common.sh@368 -- # return 0
00:05:38.819 18:12:36 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:38.819 18:12:36 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:38.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:38.819 --rc genhtml_branch_coverage=1
00:05:38.819 --rc genhtml_function_coverage=1
00:05:38.819 --rc genhtml_legend=1
00:05:38.819 --rc geninfo_all_blocks=1
00:05:38.819 --rc geninfo_unexecuted_blocks=1
00:05:38.819
00:05:38.819 '
00:05:38.819 18:12:36 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:38.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:38.819 --rc genhtml_branch_coverage=1
00:05:38.819 --rc genhtml_function_coverage=1
00:05:38.819 --rc genhtml_legend=1
00:05:38.819 --rc geninfo_all_blocks=1
00:05:38.819 --rc geninfo_unexecuted_blocks=1
00:05:38.819
00:05:38.819 '
00:05:38.819 18:12:36 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:38.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:38.819 --rc genhtml_branch_coverage=1
00:05:38.819 --rc genhtml_function_coverage=1
00:05:38.819 --rc genhtml_legend=1
00:05:38.819 --rc geninfo_all_blocks=1
00:05:38.819 --rc geninfo_unexecuted_blocks=1
00:05:38.819
00:05:38.819 '
00:05:38.819 18:12:36 event -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:38.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:38.819 --rc genhtml_branch_coverage=1
00:05:38.819 --rc genhtml_function_coverage=1
00:05:38.819 --rc genhtml_legend=1
00:05:38.819 --rc geninfo_all_blocks=1
00:05:38.819 --rc geninfo_unexecuted_blocks=1
00:05:38.819
00:05:38.819 '
00:05:38.819 18:12:36 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:05:38.819 18:12:36 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:38.819 18:12:36 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:38.819 18:12:36 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:38.819 18:12:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:38.819 18:12:36 event -- common/autotest_common.sh@10 -- # set +x
00:05:38.819 ************************************
00:05:38.819 START TEST event_perf
00:05:38.819 ************************************
00:05:38.819 18:12:37 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:38.819 Running I/O for 1 seconds...[2024-11-18 18:12:37.036933] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:05:38.819 [2024-11-18 18:12:37.037072] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2823264 ]
00:05:39.078 [2024-11-18 18:12:37.174533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:39.078 [2024-11-18 18:12:37.321693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:39.078 [2024-11-18 18:12:37.321732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:39.078 [2024-11-18 18:12:37.321763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:39.078 [2024-11-18 18:12:37.321753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:40.451 Running I/O for 1 seconds...
00:05:40.451 lcore 0: 220759
00:05:40.451 lcore 1: 220758
00:05:40.451 lcore 2: 220759
00:05:40.451 lcore 3: 220757
00:05:40.451 done.
00:05:40.451
00:05:40.451 real 0m1.586s
00:05:40.451 user 0m4.431s
00:05:40.451 sys 0m0.141s
00:05:40.451 18:12:38 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:40.451 18:12:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:40.451 ************************************
00:05:40.451 END TEST event_perf
00:05:40.451 ************************************
00:05:40.451 18:12:38 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:40.451 18:12:38 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:40.451 18:12:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:40.451 18:12:38 event -- common/autotest_common.sh@10 -- # set +x
00:05:40.451 ************************************
00:05:40.451 START TEST event_reactor
00:05:40.451 ************************************
00:05:40.451 18:12:38 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:40.451 [2024-11-18 18:12:38.675364] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:05:40.451 [2024-11-18 18:12:38.675478] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2823549 ]
00:05:40.709 [2024-11-18 18:12:38.818170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:40.709 [2024-11-18 18:12:38.956462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:42.083 test_start
00:05:42.083 oneshot
00:05:42.083 tick 100
00:05:42.083 tick 100
00:05:42.083 tick 250
00:05:42.083 tick 100
00:05:42.083 tick 100
00:05:42.083 tick 100
00:05:42.083 tick 250
00:05:42.083 tick 500
00:05:42.083 tick 100
00:05:42.083 tick 100
00:05:42.083 tick 250
00:05:42.083 tick 100
00:05:42.083 tick 100
00:05:42.083 test_end
00:05:42.083
00:05:42.083 real 0m1.575s
00:05:42.083 user 0m1.434s
00:05:42.083 sys 0m0.134s
00:05:42.083 18:12:40 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:42.083 18:12:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:42.083 ************************************
00:05:42.083 END TEST event_reactor
00:05:42.083 ************************************
00:05:42.083 18:12:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:42.083 18:12:40 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:42.083 18:12:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:42.083 18:12:40 event -- common/autotest_common.sh@10 -- # set +x
00:05:42.083 ************************************
00:05:42.083 START TEST event_reactor_perf
00:05:42.083 ************************************
00:05:42.083 18:12:40 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:42.083 [2024-11-18 18:12:40.304193] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:05:42.083 [2024-11-18 18:12:40.304316] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2823702 ]
00:05:42.341 [2024-11-18 18:12:40.452642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:42.341 [2024-11-18 18:12:40.587745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:43.714 test_start
00:05:43.714 test_end
00:05:43.714 Performance: 270276 events per second
00:05:43.714
00:05:43.714 real 0m1.577s
00:05:43.714 user 0m1.409s
00:05:43.714 sys 0m0.157s
00:05:43.714 18:12:41 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:43.714 18:12:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:43.714 ************************************
00:05:43.714 END TEST event_reactor_perf
00:05:43.714 ************************************
00:05:43.714 18:12:41 event -- event/event.sh@49 -- # uname -s
00:05:43.714 18:12:41 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:43.714 18:12:41 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:43.714 18:12:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:43.714 18:12:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:43.714 18:12:41 event -- common/autotest_common.sh@10 -- # set +x
00:05:43.714 ************************************
00:05:43.714 START TEST event_scheduler
00:05:43.714 ************************************
00:05:43.714 18:12:41 event.event_scheduler -- common/autotest_common.sh@1129 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:43.714 * Looking for test storage...
00:05:43.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:05:43.714 18:12:41 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:43.714 18:12:41 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:05:43.714 18:12:41 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:43.714 18:12:42 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:43.714 18:12:42 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:05:43.714 18:12:42 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:43.714 18:12:42 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:43.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:43.714 --rc genhtml_branch_coverage=1
00:05:43.714 --rc genhtml_function_coverage=1
00:05:43.714 --rc genhtml_legend=1
00:05:43.714 --rc geninfo_all_blocks=1
00:05:43.714 --rc geninfo_unexecuted_blocks=1
00:05:43.714
00:05:43.714 '
00:05:43.714 18:12:42 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:43.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:43.714 --rc genhtml_branch_coverage=1
00:05:43.714 --rc genhtml_function_coverage=1
00:05:43.714 --rc genhtml_legend=1
00:05:43.714 --rc geninfo_all_blocks=1
00:05:43.714 --rc geninfo_unexecuted_blocks=1
00:05:43.714
00:05:43.714 '
00:05:43.714 18:12:42 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:43.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:43.714 --rc genhtml_branch_coverage=1
00:05:43.714 --rc genhtml_function_coverage=1
00:05:43.714 --rc genhtml_legend=1
00:05:43.714 --rc geninfo_all_blocks=1
00:05:43.714 --rc geninfo_unexecuted_blocks=1
00:05:43.714
00:05:43.714 '
00:05:43.714 18:12:42 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:43.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:43.714 --rc genhtml_branch_coverage=1
00:05:43.714 --rc genhtml_function_coverage=1
00:05:43.714 --rc genhtml_legend=1
00:05:43.714 --rc geninfo_all_blocks=1
00:05:43.714 --rc geninfo_unexecuted_blocks=1
00:05:43.714
00:05:43.714 '
00:05:43.714 18:12:42 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:43.714 18:12:42 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2824022
00:05:43.714 18:12:42 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:43.714 18:12:42 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:43.714 18:12:42 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2824022
00:05:43.714 18:12:42 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2824022 ']'
00:05:43.714 18:12:42 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:43.714 18:12:42 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:43.714 18:12:42 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:43.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:43.714 18:12:42 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:43.714 18:12:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:43.973 [2024-11-18 18:12:42.120057] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:05:43.973 [2024-11-18 18:12:42.120205] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2824022 ]
00:05:44.230 [2024-11-18 18:12:42.255760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:44.230 [2024-11-18 18:12:42.381665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:44.230 [2024-11-18 18:12:42.381719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:44.230 [2024-11-18 18:12:42.381774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:44.230 [2024-11-18 18:12:42.381783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:44.796 18:12:43 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:44.796 18:12:43 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:05:44.796 18:12:43 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:44.796 18:12:43 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:44.796 18:12:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:44.796 [2024-11-18 18:12:43.092941] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:05:44.796 [2024-11-18 18:12:43.092994]
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:05:44.796 [2024-11-18 18:12:43.093028] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:44.796 [2024-11-18 18:12:43.093047] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:44.796 [2024-11-18 18:12:43.093076] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:44.796 18:12:43 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:44.796 18:12:43 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:44.796 18:12:43 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:44.796 18:12:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:45.363 [2024-11-18 18:12:43.401064] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:45.363 18:12:43 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:45.363 18:12:43 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:45.363 18:12:43 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:45.363 18:12:43 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:45.363 18:12:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:45.363 ************************************
00:05:45.363 START TEST scheduler_create_thread
00:05:45.363 ************************************
00:05:45.363 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:05:45.363 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:45.363 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:45.363 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.363 2
00:05:45.363 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:45.363 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:45.363 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:45.363 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.363 3
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.364 4
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.364 5
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.364 6
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.364 7
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.364 8
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.364 9
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.364 10
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:45.364
00:05:45.364 real 0m0.109s
00:05:45.364 user 0m0.012s
00:05:45.364 sys 0m0.001s
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:45.364 18:12:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.364 ************************************
00:05:45.364 END TEST scheduler_create_thread
00:05:45.364 ************************************
00:05:45.364 18:12:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:45.364 18:12:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2824022
00:05:45.364 18:12:43 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2824022 ']'
00:05:45.364 18:12:43 event.event_scheduler -- common/autotest_common.sh@958 -- #
kill -0 2824022 00:05:45.364 18:12:43 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:45.364 18:12:43 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.364 18:12:43 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2824022 00:05:45.364 18:12:43 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:45.364 18:12:43 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:45.364 18:12:43 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2824022' 00:05:45.364 killing process with pid 2824022 00:05:45.364 18:12:43 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2824022 00:05:45.364 18:12:43 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2824022 00:05:45.930 [2024-11-18 18:12:44.024138] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:46.864 00:05:46.864 real 0m3.085s 00:05:46.864 user 0m5.399s 00:05:46.864 sys 0m0.512s 00:05:46.864 18:12:44 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.864 18:12:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.864 ************************************ 00:05:46.864 END TEST event_scheduler 00:05:46.864 ************************************ 00:05:46.864 18:12:45 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:46.864 18:12:45 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:46.864 18:12:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.864 18:12:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.864 18:12:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.864 ************************************ 00:05:46.864 START TEST app_repeat 00:05:46.864 ************************************ 00:05:46.864 18:12:45 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:46.864 18:12:45 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.864 18:12:45 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.864 18:12:45 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:46.864 18:12:45 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.864 18:12:45 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:46.864 18:12:45 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:46.864 18:12:45 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:46.864 18:12:45 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2824395 00:05:46.864 18:12:45 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:46.864 18:12:45 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.864 18:12:45 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2824395' 00:05:46.864 Process app_repeat pid: 2824395 00:05:46.864 18:12:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:46.864 18:12:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:46.864 spdk_app_start Round 0 00:05:46.864 18:12:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2824395 /var/tmp/spdk-nbd.sock 00:05:46.864 18:12:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2824395 ']' 00:05:46.864 18:12:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.864 18:12:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.864 18:12:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:46.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:46.864 18:12:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.864 18:12:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.864 [2024-11-18 18:12:45.084074] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:46.864 [2024-11-18 18:12:45.084240] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2824395 ] 00:05:47.123 [2024-11-18 18:12:45.226932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.123 [2024-11-18 18:12:45.365910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.123 [2024-11-18 18:12:45.365916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.056 18:12:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.056 18:12:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:48.056 18:12:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.314 Malloc0 00:05:48.314 18:12:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.572 Malloc1 00:05:48.572 18:12:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.572 18:12:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.572 18:12:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.572 18:12:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.572 18:12:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.572 18:12:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.572 18:12:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.572 
18:12:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.572 18:12:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.572 18:12:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.572 18:12:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.572 18:12:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.572 18:12:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.572 18:12:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.572 18:12:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.572 18:12:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.830 /dev/nbd0 00:05:48.830 18:12:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.830 18:12:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.830 18:12:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:48.830 18:12:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:48.830 18:12:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:48.830 18:12:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:48.830 18:12:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:48.830 18:12:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:48.830 18:12:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:48.830 18:12:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:48.830 18:12:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:48.830 1+0 records in 00:05:48.830 1+0 records out 00:05:48.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197513 s, 20.7 MB/s 00:05:48.830 18:12:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.830 18:12:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:48.830 18:12:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.830 18:12:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:48.830 18:12:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:48.830 18:12:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.830 18:12:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.830 18:12:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.088 /dev/nbd1 00:05:49.088 18:12:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.088 18:12:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.088 18:12:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:49.088 18:12:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:49.088 18:12:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:49.088 18:12:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:49.088 18:12:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:49.088 18:12:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:49.088 18:12:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:49.088 18:12:47 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:49.088 18:12:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.088 1+0 records in 00:05:49.088 1+0 records out 00:05:49.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202857 s, 20.2 MB/s 00:05:49.088 18:12:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.088 18:12:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:49.089 18:12:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.089 18:12:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:49.089 18:12:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:49.089 18:12:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.089 18:12:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.089 18:12:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.089 18:12:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.089 18:12:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:49.654 { 00:05:49.654 "nbd_device": "/dev/nbd0", 00:05:49.654 "bdev_name": "Malloc0" 00:05:49.654 }, 00:05:49.654 { 00:05:49.654 "nbd_device": "/dev/nbd1", 00:05:49.654 "bdev_name": "Malloc1" 00:05:49.654 } 00:05:49.654 ]' 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.654 { 00:05:49.654 "nbd_device": "/dev/nbd0", 00:05:49.654 "bdev_name": "Malloc0" 00:05:49.654 
}, 00:05:49.654 { 00:05:49.654 "nbd_device": "/dev/nbd1", 00:05:49.654 "bdev_name": "Malloc1" 00:05:49.654 } 00:05:49.654 ]' 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.654 /dev/nbd1' 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.654 /dev/nbd1' 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.654 256+0 records in 00:05:49.654 256+0 records out 00:05:49.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474668 s, 221 MB/s 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.654 256+0 records in 00:05:49.654 256+0 records out 00:05:49.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264646 s, 39.6 MB/s 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.654 256+0 records in 00:05:49.654 256+0 records out 00:05:49.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284813 s, 36.8 MB/s 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.654 18:12:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.654 18:12:47 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.655 18:12:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.655 18:12:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.655 18:12:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.655 18:12:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.655 18:12:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:49.655 18:12:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.655 18:12:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.912 18:12:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.912 18:12:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.912 18:12:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.912 18:12:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.912 18:12:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.912 18:12:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.912 18:12:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.912 18:12:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.912 18:12:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.912 18:12:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.170 18:12:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.170 18:12:48 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.170 18:12:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.170 18:12:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.170 18:12:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.170 18:12:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.170 18:12:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.170 18:12:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.170 18:12:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.170 18:12:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.170 18:12:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.427 18:12:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.427 18:12:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.427 18:12:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.427 18:12:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.427 18:12:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.427 18:12:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.427 18:12:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.427 18:12:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.428 18:12:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.428 18:12:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.428 18:12:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.428 18:12:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.428 18:12:48 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.993 18:12:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.366 [2024-11-18 18:12:50.364853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.366 [2024-11-18 18:12:50.499892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.366 [2024-11-18 18:12:50.499896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.625 [2024-11-18 18:12:50.709214] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.625 [2024-11-18 18:12:50.709279] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.094 18:12:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.094 18:12:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:54.094 spdk_app_start Round 1 00:05:54.094 18:12:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2824395 /var/tmp/spdk-nbd.sock 00:05:54.094 18:12:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2824395 ']' 00:05:54.094 18:12:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.094 18:12:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.094 18:12:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:54.094 18:12:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.094 18:12:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.094 18:12:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.094 18:12:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:54.094 18:12:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.660 Malloc0 00:05:54.660 18:12:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.924 Malloc1 00:05:54.924 18:12:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.924 18:12:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.924 18:12:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.924 18:12:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.924 18:12:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.924 18:12:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.924 18:12:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.924 18:12:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.924 18:12:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.924 18:12:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.924 18:12:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.924 18:12:53 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:54.924 18:12:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.924 18:12:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.924 18:12:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.924 18:12:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.182 /dev/nbd0 00:05:55.182 18:12:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.182 18:12:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.182 18:12:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:55.182 18:12:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:55.182 18:12:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.182 18:12:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.182 18:12:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:55.182 18:12:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:55.182 18:12:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.182 18:12:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.182 18:12:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.182 1+0 records in 00:05:55.182 1+0 records out 00:05:55.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267729 s, 15.3 MB/s 00:05:55.182 18:12:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.182 18:12:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:55.182 18:12:53 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.182 18:12:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.182 18:12:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:55.182 18:12:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.182 18:12:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.182 18:12:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.439 /dev/nbd1 00:05:55.439 18:12:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.439 18:12:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.439 18:12:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:55.439 18:12:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:55.439 18:12:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.439 18:12:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.439 18:12:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:55.439 18:12:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:55.439 18:12:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.439 18:12:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.439 18:12:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.439 1+0 records in 00:05:55.439 1+0 records out 00:05:55.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224326 s, 18.3 MB/s 00:05:55.439 18:12:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.439 18:12:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:55.439 18:12:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.439 18:12:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.439 18:12:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:55.439 18:12:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.439 18:12:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.440 18:12:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.440 18:12:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.440 18:12:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.697 18:12:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.697 { 00:05:55.697 "nbd_device": "/dev/nbd0", 00:05:55.697 "bdev_name": "Malloc0" 00:05:55.697 }, 00:05:55.697 { 00:05:55.697 "nbd_device": "/dev/nbd1", 00:05:55.697 "bdev_name": "Malloc1" 00:05:55.697 } 00:05:55.697 ]' 00:05:55.697 18:12:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.697 { 00:05:55.697 "nbd_device": "/dev/nbd0", 00:05:55.697 "bdev_name": "Malloc0" 00:05:55.697 }, 00:05:55.697 { 00:05:55.697 "nbd_device": "/dev/nbd1", 00:05:55.697 "bdev_name": "Malloc1" 00:05:55.697 } 00:05:55.697 ]' 00:05:55.697 18:12:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.956 /dev/nbd1' 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.956 18:12:54 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.956 /dev/nbd1' 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.956 256+0 records in 00:05:55.956 256+0 records out 00:05:55.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00381273 s, 275 MB/s 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.956 256+0 records in 00:05:55.956 256+0 records out 00:05:55.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249294 s, 42.1 MB/s 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.956 256+0 records in 00:05:55.956 256+0 records out 00:05:55.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304149 s, 34.5 MB/s 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.956 18:12:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.215 18:12:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.215 18:12:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.215 18:12:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.215 18:12:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.215 18:12:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.215 18:12:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.215 18:12:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.215 18:12:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.215 18:12:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.215 18:12:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.472 18:12:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.472 18:12:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.473 18:12:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.473 18:12:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.473 18:12:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.473 18:12:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.473 18:12:54 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:56.473 18:12:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.473 18:12:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.473 18:12:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.473 18:12:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.730 18:12:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.730 18:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.730 18:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.730 18:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.730 18:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.730 18:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.730 18:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.730 18:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.730 18:12:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.731 18:12:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.731 18:12:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.731 18:12:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.731 18:12:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.296 18:12:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:58.669 [2024-11-18 18:12:56.744314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.669 [2024-11-18 18:12:56.879362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.669 [2024-11-18 18:12:56.879362] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.927 [2024-11-18 18:12:57.094873] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:58.927 [2024-11-18 18:12:57.094982] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.299 18:12:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.299 18:12:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:00.299 spdk_app_start Round 2 00:06:00.299 18:12:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2824395 /var/tmp/spdk-nbd.sock 00:06:00.300 18:12:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2824395 ']' 00:06:00.300 18:12:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.300 18:12:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.300 18:12:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:00.300 18:12:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.300 18:12:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.558 18:12:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.558 18:12:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:00.558 18:12:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.816 Malloc0 00:06:00.817 18:12:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.426 Malloc1 00:06:01.426 18:12:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.426 18:12:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.427 18:12:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.427 18:12:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.427 18:12:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.427 18:12:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.427 18:12:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.427 18:12:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.427 18:12:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.427 18:12:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.427 18:12:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.427 18:12:59 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:01.427 18:12:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:01.427 18:12:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.427 18:12:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.427 18:12:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.684 /dev/nbd0 00:06:01.684 18:12:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.684 18:12:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.685 18:12:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:01.685 18:12:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:01.685 18:12:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.685 18:12:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.685 18:12:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:01.685 18:12:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:01.685 18:12:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.685 18:12:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.685 18:12:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.685 1+0 records in 00:06:01.685 1+0 records out 00:06:01.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251493 s, 16.3 MB/s 00:06:01.685 18:12:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.685 18:12:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:01.685 18:12:59 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.685 18:12:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.685 18:12:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:01.685 18:12:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.685 18:12:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.685 18:12:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.942 /dev/nbd1 00:06:01.942 18:13:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.942 18:13:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.942 18:13:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:01.942 18:13:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:01.942 18:13:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.942 18:13:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.942 18:13:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:01.942 18:13:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:01.942 18:13:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.942 18:13:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.942 18:13:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.942 1+0 records in 00:06:01.942 1+0 records out 00:06:01.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262192 s, 15.6 MB/s 00:06:01.942 18:13:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.942 18:13:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:01.942 18:13:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.942 18:13:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.942 18:13:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:01.942 18:13:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.942 18:13:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.942 18:13:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.942 18:13:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.942 18:13:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.200 18:13:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.200 { 00:06:02.200 "nbd_device": "/dev/nbd0", 00:06:02.200 "bdev_name": "Malloc0" 00:06:02.200 }, 00:06:02.200 { 00:06:02.200 "nbd_device": "/dev/nbd1", 00:06:02.200 "bdev_name": "Malloc1" 00:06:02.200 } 00:06:02.200 ]' 00:06:02.200 18:13:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.200 { 00:06:02.200 "nbd_device": "/dev/nbd0", 00:06:02.200 "bdev_name": "Malloc0" 00:06:02.200 }, 00:06:02.200 { 00:06:02.200 "nbd_device": "/dev/nbd1", 00:06:02.200 "bdev_name": "Malloc1" 00:06:02.200 } 00:06:02.200 ]' 00:06:02.200 18:13:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.200 18:13:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.200 /dev/nbd1' 00:06:02.200 18:13:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.200 /dev/nbd1' 00:06:02.200 
18:13:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.200 18:13:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.200 18:13:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.200 18:13:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.200 18:13:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.200 18:13:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.200 18:13:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.200 18:13:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.200 18:13:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.200 18:13:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.200 18:13:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.200 18:13:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.200 256+0 records in 00:06:02.200 256+0 records out 00:06:02.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515449 s, 203 MB/s 00:06:02.200 18:13:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.200 18:13:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.200 256+0 records in 00:06:02.200 256+0 records out 00:06:02.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268416 s, 39.1 MB/s 00:06:02.200 18:13:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.200 18:13:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.458 256+0 records in 00:06:02.458 256+0 records out 00:06:02.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293811 s, 35.7 MB/s 00:06:02.458 18:13:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.458 18:13:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.458 18:13:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.458 18:13:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.458 18:13:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.458 18:13:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.458 18:13:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.458 18:13:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.458 18:13:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.458 18:13:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.458 18:13:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.458 18:13:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.458 18:13:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.458 18:13:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.458 18:13:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:02.458 18:13:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.458 18:13:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:02.458 18:13:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.458 18:13:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.715 18:13:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.715 18:13:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.715 18:13:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.715 18:13:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.715 18:13:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.715 18:13:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.715 18:13:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.715 18:13:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.715 18:13:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.716 18:13:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.973 18:13:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.973 18:13:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.973 18:13:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.973 18:13:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.973 18:13:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.973 18:13:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.973 18:13:01 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:02.973 18:13:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.973 18:13:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.973 18:13:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.973 18:13:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.231 18:13:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.231 18:13:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.231 18:13:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.231 18:13:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.231 18:13:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.231 18:13:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.231 18:13:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:03.231 18:13:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.231 18:13:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.231 18:13:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.231 18:13:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.231 18:13:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.231 18:13:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.795 18:13:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:05.170 [2024-11-18 18:13:03.149366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.170 [2024-11-18 18:13:03.284051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.170 [2024-11-18 18:13:03.284056] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.170 [2024-11-18 18:13:03.497345] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.170 [2024-11-18 18:13:03.497432] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.069 18:13:04 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2824395 /var/tmp/spdk-nbd.sock 00:06:07.069 18:13:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2824395 ']' 00:06:07.069 18:13:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.069 18:13:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.069 18:13:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:07.069 18:13:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.069 18:13:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.069 18:13:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.069 18:13:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:07.069 18:13:05 event.app_repeat -- event/event.sh@39 -- # killprocess 2824395 00:06:07.069 18:13:05 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2824395 ']' 00:06:07.069 18:13:05 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2824395 00:06:07.069 18:13:05 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:07.069 18:13:05 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.069 18:13:05 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2824395 00:06:07.069 18:13:05 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.069 18:13:05 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.069 18:13:05 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2824395' 00:06:07.069 killing process with pid 2824395 00:06:07.069 18:13:05 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2824395 00:06:07.069 18:13:05 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2824395 00:06:08.003 spdk_app_start is called in Round 0. 00:06:08.003 Shutdown signal received, stop current app iteration 00:06:08.003 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:06:08.003 spdk_app_start is called in Round 1. 00:06:08.003 Shutdown signal received, stop current app iteration 00:06:08.003 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:06:08.003 spdk_app_start is called in Round 2. 
00:06:08.003 Shutdown signal received, stop current app iteration 00:06:08.003 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:06:08.003 spdk_app_start is called in Round 3. 00:06:08.003 Shutdown signal received, stop current app iteration 00:06:08.003 18:13:06 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:08.003 18:13:06 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:08.003 00:06:08.003 real 0m21.258s 00:06:08.003 user 0m45.348s 00:06:08.003 sys 0m3.366s 00:06:08.003 18:13:06 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.003 18:13:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.003 ************************************ 00:06:08.003 END TEST app_repeat 00:06:08.003 ************************************ 00:06:08.003 18:13:06 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:08.003 18:13:06 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:08.003 18:13:06 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.003 18:13:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.003 18:13:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.262 ************************************ 00:06:08.262 START TEST cpu_locks 00:06:08.262 ************************************ 00:06:08.262 18:13:06 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:08.262 * Looking for test storage... 
00:06:08.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:08.262 18:13:06 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:08.262 18:13:06 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:08.262 18:13:06 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:08.262 18:13:06 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:08.262 18:13:06 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.262 18:13:06 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.262 18:13:06 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.262 18:13:06 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.262 18:13:06 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.262 18:13:06 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.262 18:13:06 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.262 18:13:06 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.262 18:13:06 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.262 18:13:06 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.262 18:13:06 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.262 18:13:06 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:08.262 18:13:06 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:08.262 18:13:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.262 18:13:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.262 18:13:06 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:08.262 18:13:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:08.263 18:13:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.263 18:13:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:08.263 18:13:06 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.263 18:13:06 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:08.263 18:13:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:08.263 18:13:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.263 18:13:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:08.263 18:13:06 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.263 18:13:06 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.263 18:13:06 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.263 18:13:06 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:08.263 18:13:06 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.263 18:13:06 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:08.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.263 --rc genhtml_branch_coverage=1 00:06:08.263 --rc genhtml_function_coverage=1 00:06:08.263 --rc genhtml_legend=1 00:06:08.263 --rc geninfo_all_blocks=1 00:06:08.263 --rc geninfo_unexecuted_blocks=1 00:06:08.263 00:06:08.263 ' 00:06:08.263 18:13:06 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:08.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.263 --rc genhtml_branch_coverage=1 00:06:08.263 --rc genhtml_function_coverage=1 00:06:08.263 --rc genhtml_legend=1 00:06:08.263 --rc geninfo_all_blocks=1 00:06:08.263 --rc geninfo_unexecuted_blocks=1 
00:06:08.263 00:06:08.263 ' 00:06:08.263 18:13:06 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:08.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.263 --rc genhtml_branch_coverage=1 00:06:08.263 --rc genhtml_function_coverage=1 00:06:08.263 --rc genhtml_legend=1 00:06:08.263 --rc geninfo_all_blocks=1 00:06:08.263 --rc geninfo_unexecuted_blocks=1 00:06:08.263 00:06:08.263 ' 00:06:08.263 18:13:06 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:08.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.263 --rc genhtml_branch_coverage=1 00:06:08.263 --rc genhtml_function_coverage=1 00:06:08.263 --rc genhtml_legend=1 00:06:08.263 --rc geninfo_all_blocks=1 00:06:08.263 --rc geninfo_unexecuted_blocks=1 00:06:08.263 00:06:08.263 ' 00:06:08.263 18:13:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:08.263 18:13:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:08.263 18:13:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:08.263 18:13:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:08.263 18:13:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.263 18:13:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.263 18:13:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.263 ************************************ 00:06:08.263 START TEST default_locks 00:06:08.263 ************************************ 00:06:08.263 18:13:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:08.263 18:13:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2827227 00:06:08.263 18:13:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:06:08.263 18:13:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2827227 00:06:08.263 18:13:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2827227 ']' 00:06:08.263 18:13:06 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.263 18:13:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.263 18:13:06 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.263 18:13:06 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.263 18:13:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.263 [2024-11-18 18:13:06.587368] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:08.263 [2024-11-18 18:13:06.587518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827227 ] 00:06:08.521 [2024-11-18 18:13:06.732726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.779 [2024-11-18 18:13:06.871465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.713 18:13:07 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.713 18:13:07 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:09.713 18:13:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2827227 00:06:09.713 18:13:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2827227 00:06:09.713 18:13:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.971 lslocks: write error 00:06:09.971 18:13:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2827227 00:06:09.971 18:13:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2827227 ']' 00:06:09.971 18:13:08 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2827227 00:06:09.971 18:13:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:09.971 18:13:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.971 18:13:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2827227 00:06:09.971 18:13:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.971 18:13:08 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.971 18:13:08 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2827227' 00:06:09.971 killing process with pid 2827227 00:06:09.971 18:13:08 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2827227 00:06:09.971 18:13:08 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2827227 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2827227 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2827227 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2827227 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2827227 ']' 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2827227) - No such process 00:06:12.500 ERROR: process (pid: 2827227) is no longer running 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:12.500 00:06:12.500 real 0m4.118s 00:06:12.500 user 0m4.164s 00:06:12.500 sys 0m0.740s 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.500 18:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.500 ************************************ 00:06:12.500 END TEST default_locks 00:06:12.500 ************************************ 00:06:12.500 18:13:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:12.500 18:13:10 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.500 18:13:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.500 18:13:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.500 ************************************ 00:06:12.500 START TEST default_locks_via_rpc 00:06:12.500 ************************************ 00:06:12.500 18:13:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:12.500 18:13:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2827669 00:06:12.500 18:13:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.500 18:13:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2827669 00:06:12.500 18:13:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2827669 ']' 00:06:12.500 18:13:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.500 18:13:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.500 18:13:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.500 18:13:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.500 18:13:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.500 [2024-11-18 18:13:10.764445] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:12.500 [2024-11-18 18:13:10.764589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827669 ] 00:06:12.758 [2024-11-18 18:13:10.902346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.758 [2024-11-18 18:13:11.034051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.693 18:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.693 18:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.693 18:13:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:13.693 18:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.693 18:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.693 18:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.693 18:13:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:13.693 18:13:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:13.693 18:13:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:13.693 18:13:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:13.693 18:13:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:13.693 18:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.693 18:13:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.693 18:13:11 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.693 18:13:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2827669 00:06:13.693 18:13:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2827669 00:06:13.693 18:13:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.260 18:13:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2827669 00:06:14.260 18:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2827669 ']' 00:06:14.260 18:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2827669 00:06:14.260 18:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:14.260 18:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.260 18:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2827669 00:06:14.260 18:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.260 18:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.260 18:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2827669' 00:06:14.260 killing process with pid 2827669 00:06:14.260 18:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2827669 00:06:14.260 18:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2827669 00:06:16.790 00:06:16.790 real 0m4.081s 00:06:16.790 user 0m4.075s 00:06:16.790 sys 0m0.775s 00:06:16.790 18:13:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.790 18:13:14 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.790 ************************************ 00:06:16.790 END TEST default_locks_via_rpc 00:06:16.790 ************************************ 00:06:16.790 18:13:14 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:16.790 18:13:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.790 18:13:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.790 18:13:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.790 ************************************ 00:06:16.790 START TEST non_locking_app_on_locked_coremask 00:06:16.790 ************************************ 00:06:16.790 18:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:16.790 18:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2828227 00:06:16.790 18:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.790 18:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2828227 /var/tmp/spdk.sock 00:06:16.790 18:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2828227 ']' 00:06:16.790 18:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.790 18:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.790 18:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:16.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.790 18:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.790 18:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.790 [2024-11-18 18:13:14.892062] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:16.790 [2024-11-18 18:13:14.892220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828227 ] 00:06:16.790 [2024-11-18 18:13:15.034163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.048 [2024-11-18 18:13:15.172185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.984 18:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.984 18:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:17.984 18:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2828363 00:06:17.984 18:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:17.984 18:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2828363 /var/tmp/spdk2.sock 00:06:17.984 18:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2828363 ']' 00:06:17.984 18:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:17.984 18:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.984 18:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.984 18:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.984 18:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.984 [2024-11-18 18:13:16.220419] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:17.984 [2024-11-18 18:13:16.220568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828363 ] 00:06:18.242 [2024-11-18 18:13:16.406261] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:18.242 [2024-11-18 18:13:16.406341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.500 [2024-11-18 18:13:16.684759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.032 18:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.032 18:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:21.032 18:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2828227 00:06:21.032 18:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2828227 00:06:21.032 18:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.032 lslocks: write error 00:06:21.032 18:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2828227 00:06:21.032 18:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2828227 ']' 00:06:21.032 18:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2828227 00:06:21.032 18:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:21.032 18:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.032 18:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2828227 00:06:21.290 18:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.290 18:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.290 18:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2828227' 00:06:21.290 killing process with pid 2828227 00:06:21.291 18:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2828227 00:06:21.291 18:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2828227 00:06:26.557 18:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2828363 00:06:26.557 18:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2828363 ']' 00:06:26.557 18:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2828363 00:06:26.557 18:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:26.557 18:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.557 18:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2828363 00:06:26.557 18:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.557 18:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.557 18:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2828363' 00:06:26.557 killing process with pid 2828363 00:06:26.557 18:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2828363 00:06:26.557 18:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2828363 00:06:28.455 00:06:28.455 real 0m11.911s 00:06:28.455 user 0m12.265s 00:06:28.455 sys 0m1.491s 00:06:28.456 18:13:26 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.456 18:13:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.456 ************************************ 00:06:28.456 END TEST non_locking_app_on_locked_coremask 00:06:28.456 ************************************ 00:06:28.456 18:13:26 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:28.456 18:13:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.456 18:13:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.456 18:13:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.456 ************************************ 00:06:28.456 START TEST locking_app_on_unlocked_coremask 00:06:28.456 ************************************ 00:06:28.456 18:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:28.456 18:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2829721 00:06:28.456 18:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:28.456 18:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2829721 /var/tmp/spdk.sock 00:06:28.456 18:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2829721 ']' 00:06:28.456 18:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.456 18:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.456 18:13:26 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.456 18:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.456 18:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.714 [2024-11-18 18:13:26.853256] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:28.714 [2024-11-18 18:13:26.853401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829721 ] 00:06:28.714 [2024-11-18 18:13:26.987160] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:28.714 [2024-11-18 18:13:26.987229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.972 [2024-11-18 18:13:27.121082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.906 18:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.906 18:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:29.906 18:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2829860 00:06:29.906 18:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:29.906 18:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2829860 /var/tmp/spdk2.sock 00:06:29.906 18:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2829860 ']' 00:06:29.906 18:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.906 18:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.906 18:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.906 18:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.906 18:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.906 [2024-11-18 18:13:28.126371] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:29.906 [2024-11-18 18:13:28.126516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829860 ] 00:06:30.191 [2024-11-18 18:13:28.338055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.478 [2024-11-18 18:13:28.618861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.011 18:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.011 18:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:33.011 18:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2829860 00:06:33.011 18:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2829860 00:06:33.011 18:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.011 lslocks: write error 00:06:33.011 18:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2829721 00:06:33.011 18:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2829721 ']' 00:06:33.011 18:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2829721 00:06:33.011 18:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:33.011 18:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.011 18:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2829721 00:06:33.011 18:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.011 18:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.011 18:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2829721' 00:06:33.011 killing process with pid 2829721 00:06:33.011 18:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2829721 00:06:33.011 18:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2829721 00:06:38.277 18:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2829860 00:06:38.277 18:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2829860 ']' 00:06:38.277 18:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2829860 00:06:38.277 18:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:38.277 18:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.277 18:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2829860 00:06:38.277 18:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.277 18:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.277 18:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2829860' 00:06:38.277 killing process with pid 2829860 00:06:38.277 18:13:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2829860 00:06:38.277 18:13:36 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2829860 00:06:40.806 00:06:40.806 real 0m11.906s 00:06:40.806 user 0m12.319s 00:06:40.806 sys 0m1.405s 00:06:40.806 18:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.806 18:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.806 ************************************ 00:06:40.806 END TEST locking_app_on_unlocked_coremask 00:06:40.806 ************************************ 00:06:40.806 18:13:38 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:40.806 18:13:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.806 18:13:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.806 18:13:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.806 ************************************ 00:06:40.806 START TEST locking_app_on_locked_coremask 00:06:40.806 ************************************ 00:06:40.806 18:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:40.806 18:13:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2831105 00:06:40.807 18:13:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.807 18:13:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2831105 /var/tmp/spdk.sock 00:06:40.807 18:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2831105 ']' 00:06:40.807 18:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:40.807 18:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.807 18:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.807 18:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.807 18:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.807 [2024-11-18 18:13:38.807568] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:40.807 [2024-11-18 18:13:38.807750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831105 ] 00:06:40.807 [2024-11-18 18:13:38.943167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.807 [2024-11-18 18:13:39.076358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.740 18:13:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.740 18:13:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:41.740 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2831245 00:06:41.740 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:41.740 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2831245 /var/tmp/spdk2.sock 
00:06:41.740 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:41.740 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2831245 /var/tmp/spdk2.sock 00:06:41.740 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:41.740 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.740 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:41.740 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.740 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2831245 /var/tmp/spdk2.sock 00:06:41.740 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2831245 ']' 00:06:41.740 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.740 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.740 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:41.740 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.740 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.998 [2024-11-18 18:13:40.106015] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:41.998 [2024-11-18 18:13:40.106170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831245 ] 00:06:41.998 [2024-11-18 18:13:40.303993] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2831105 has claimed it. 00:06:41.998 [2024-11-18 18:13:40.304094] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:42.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2831245) - No such process 00:06:42.563 ERROR: process (pid: 2831245) is no longer running 00:06:42.563 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.563 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:42.563 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:42.563 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:42.563 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:42.563 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:42.563 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2831105 00:06:42.563 18:13:40 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2831105 00:06:42.563 18:13:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.821 lslocks: write error 00:06:42.821 18:13:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2831105 00:06:42.821 18:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2831105 ']' 00:06:42.821 18:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2831105 00:06:42.821 18:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:42.821 18:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.821 18:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2831105 00:06:42.821 18:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.821 18:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.821 18:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2831105' 00:06:42.821 killing process with pid 2831105 00:06:42.821 18:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2831105 00:06:42.821 18:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2831105 00:06:45.350 00:06:45.350 real 0m4.792s 00:06:45.350 user 0m5.113s 00:06:45.350 sys 0m0.915s 00:06:45.351 18:13:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.351 18:13:43 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:45.351 ************************************ 00:06:45.351 END TEST locking_app_on_locked_coremask 00:06:45.351 ************************************ 00:06:45.351 18:13:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:45.351 18:13:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.351 18:13:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.351 18:13:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.351 ************************************ 00:06:45.351 START TEST locking_overlapped_coremask 00:06:45.351 ************************************ 00:06:45.351 18:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:45.351 18:13:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2831680 00:06:45.351 18:13:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:45.351 18:13:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2831680 /var/tmp/spdk.sock 00:06:45.351 18:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2831680 ']' 00:06:45.351 18:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.351 18:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.351 18:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:45.351 18:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.351 18:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.351 [2024-11-18 18:13:43.647993] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:45.351 [2024-11-18 18:13:43.648150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831680 ] 00:06:45.609 [2024-11-18 18:13:43.794137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.609 [2024-11-18 18:13:43.939433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.609 [2024-11-18 18:13:43.939493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.609 [2024-11-18 18:13:43.939499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.982 18:13:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.982 18:13:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:46.982 18:13:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2831938 00:06:46.982 18:13:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:46.982 18:13:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2831938 /var/tmp/spdk2.sock 00:06:46.982 18:13:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:46.982 18:13:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 2831938 /var/tmp/spdk2.sock 00:06:46.982 18:13:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:46.982 18:13:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.982 18:13:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:46.982 18:13:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.982 18:13:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2831938 /var/tmp/spdk2.sock 00:06:46.982 18:13:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2831938 ']' 00:06:46.982 18:13:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.982 18:13:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.982 18:13:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.982 18:13:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.982 18:13:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.982 [2024-11-18 18:13:45.007138] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:46.982 [2024-11-18 18:13:45.007293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831938 ] 00:06:46.982 [2024-11-18 18:13:45.219830] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2831680 has claimed it. 00:06:46.982 [2024-11-18 18:13:45.219931] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:47.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2831938) - No such process 00:06:47.547 ERROR: process (pid: 2831938) is no longer running 00:06:47.547 18:13:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.547 18:13:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:47.547 18:13:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:47.548 18:13:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:47.548 18:13:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:47.548 18:13:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:47.548 18:13:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:47.548 18:13:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:47.548 18:13:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:47.548 18:13:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:47.548 18:13:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2831680 00:06:47.548 18:13:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2831680 ']' 00:06:47.548 18:13:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2831680 00:06:47.548 18:13:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:47.548 18:13:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.548 18:13:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2831680 00:06:47.548 18:13:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.548 18:13:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.548 18:13:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2831680' 00:06:47.548 killing process with pid 2831680 00:06:47.548 18:13:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2831680 00:06:47.548 18:13:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2831680 00:06:50.078 00:06:50.078 real 0m4.653s 00:06:50.078 user 0m12.718s 00:06:50.078 sys 0m0.771s 00:06:50.078 18:13:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.078 18:13:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.078 
************************************ 00:06:50.078 END TEST locking_overlapped_coremask 00:06:50.078 ************************************ 00:06:50.078 18:13:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:50.078 18:13:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.078 18:13:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.078 18:13:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.078 ************************************ 00:06:50.078 START TEST locking_overlapped_coremask_via_rpc 00:06:50.078 ************************************ 00:06:50.078 18:13:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:50.078 18:13:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2832362 00:06:50.078 18:13:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:50.078 18:13:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2832362 /var/tmp/spdk.sock 00:06:50.078 18:13:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2832362 ']' 00:06:50.078 18:13:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.078 18:13:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.078 18:13:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:50.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.078 18:13:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.078 18:13:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.078 [2024-11-18 18:13:48.355732] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:50.078 [2024-11-18 18:13:48.355873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832362 ] 00:06:50.336 [2024-11-18 18:13:48.488228] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:50.336 [2024-11-18 18:13:48.488291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.336 [2024-11-18 18:13:48.626054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.336 [2024-11-18 18:13:48.626116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.336 [2024-11-18 18:13:48.626125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.269 18:13:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.269 18:13:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:51.269 18:13:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2832506 00:06:51.269 18:13:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2832506 /var/tmp/spdk2.sock 00:06:51.269 18:13:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:06:51.269 18:13:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2832506 ']' 00:06:51.269 18:13:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.269 18:13:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.269 18:13:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.269 18:13:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.269 18:13:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.269 [2024-11-18 18:13:49.574864] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:51.269 [2024-11-18 18:13:49.575026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832506 ] 00:06:51.527 [2024-11-18 18:13:49.768314] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:51.527 [2024-11-18 18:13:49.768376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.785 [2024-11-18 18:13:50.030599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.785 [2024-11-18 18:13:50.033667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.785 [2024-11-18 18:13:50.033678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:54.312 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.312 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:54.312 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:54.312 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.312 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.312 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.312 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:54.312 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:54.312 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:54.312 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:54.312 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.312 18:13:52 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:54.312 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.313 [2024-11-18 18:13:52.302779] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2832362 has claimed it. 00:06:54.313 request: 00:06:54.313 { 00:06:54.313 "method": "framework_enable_cpumask_locks", 00:06:54.313 "req_id": 1 00:06:54.313 } 00:06:54.313 Got JSON-RPC error response 00:06:54.313 response: 00:06:54.313 { 00:06:54.313 "code": -32603, 00:06:54.313 "message": "Failed to claim CPU core: 2" 00:06:54.313 } 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2832362 /var/tmp/spdk.sock 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 2832362 ']' 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2832506 /var/tmp/spdk2.sock 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2832506 ']' 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.313 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.571 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.571 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:54.571 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:54.571 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:54.571 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:54.571 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:54.571 00:06:54.571 real 0m4.634s 00:06:54.571 user 0m1.585s 00:06:54.571 sys 0m0.286s 00:06:54.571 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.571 18:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.571 ************************************ 00:06:54.571 END TEST locking_overlapped_coremask_via_rpc 00:06:54.571 ************************************ 00:06:54.571 18:13:52 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:54.571 18:13:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2832362 ]] 00:06:54.571 18:13:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2832362 00:06:54.571 18:13:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2832362 ']' 00:06:54.571 18:13:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2832362 00:06:54.571 18:13:52 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:54.571 18:13:52 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.571 18:13:52 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2832362 00:06:54.829 18:13:52 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.829 18:13:52 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.829 18:13:52 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2832362' 00:06:54.829 killing process with pid 2832362 00:06:54.829 18:13:52 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2832362 00:06:54.829 18:13:52 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2832362 00:06:57.356 18:13:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2832506 ]] 00:06:57.356 18:13:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2832506 00:06:57.356 18:13:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2832506 ']' 00:06:57.356 18:13:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2832506 00:06:57.356 18:13:55 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:57.356 18:13:55 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.356 18:13:55 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2832506 00:06:57.356 18:13:55 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:57.356 18:13:55 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:57.356 18:13:55 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2832506' 00:06:57.356 killing process with pid 2832506 00:06:57.356 18:13:55 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2832506 00:06:57.356 18:13:55 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2832506 00:06:59.255 18:13:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:59.255 18:13:57 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:59.255 18:13:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2832362 ]] 00:06:59.255 18:13:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2832362 00:06:59.255 18:13:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2832362 ']' 00:06:59.255 18:13:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2832362 00:06:59.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2832362) - No such process 00:06:59.255 18:13:57 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2832362 is not found' 00:06:59.255 Process with pid 2832362 is not found 00:06:59.255 18:13:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2832506 ]] 00:06:59.255 18:13:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2832506 00:06:59.255 18:13:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2832506 ']' 00:06:59.255 18:13:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2832506 00:06:59.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2832506) - No such process 00:06:59.255 18:13:57 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2832506 is not found' 00:06:59.255 Process with pid 2832506 is not found 00:06:59.255 18:13:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:59.255 00:06:59.255 real 0m50.998s 00:06:59.255 user 1m27.481s 00:06:59.255 sys 0m7.657s 00:06:59.255 18:13:57 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.255 
18:13:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.255 ************************************ 00:06:59.255 END TEST cpu_locks 00:06:59.255 ************************************ 00:06:59.255 00:06:59.255 real 1m20.523s 00:06:59.255 user 2m25.728s 00:06:59.255 sys 0m12.209s 00:06:59.255 18:13:57 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.255 18:13:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.255 ************************************ 00:06:59.255 END TEST event 00:06:59.255 ************************************ 00:06:59.255 18:13:57 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:59.255 18:13:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.255 18:13:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.255 18:13:57 -- common/autotest_common.sh@10 -- # set +x 00:06:59.255 ************************************ 00:06:59.255 START TEST thread 00:06:59.255 ************************************ 00:06:59.255 18:13:57 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:59.255 * Looking for test storage... 
00:06:59.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:59.255 18:13:57 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:59.255 18:13:57 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:59.255 18:13:57 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:59.255 18:13:57 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:59.255 18:13:57 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.255 18:13:57 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.255 18:13:57 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.255 18:13:57 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.255 18:13:57 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.255 18:13:57 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.256 18:13:57 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.256 18:13:57 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.256 18:13:57 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.256 18:13:57 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.256 18:13:57 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.256 18:13:57 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:59.256 18:13:57 thread -- scripts/common.sh@345 -- # : 1 00:06:59.256 18:13:57 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.256 18:13:57 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.256 18:13:57 thread -- scripts/common.sh@365 -- # decimal 1 00:06:59.256 18:13:57 thread -- scripts/common.sh@353 -- # local d=1 00:06:59.256 18:13:57 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.256 18:13:57 thread -- scripts/common.sh@355 -- # echo 1 00:06:59.256 18:13:57 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.256 18:13:57 thread -- scripts/common.sh@366 -- # decimal 2 00:06:59.256 18:13:57 thread -- scripts/common.sh@353 -- # local d=2 00:06:59.256 18:13:57 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.256 18:13:57 thread -- scripts/common.sh@355 -- # echo 2 00:06:59.256 18:13:57 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.256 18:13:57 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.256 18:13:57 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.256 18:13:57 thread -- scripts/common.sh@368 -- # return 0 00:06:59.256 18:13:57 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.256 18:13:57 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:59.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.256 --rc genhtml_branch_coverage=1 00:06:59.256 --rc genhtml_function_coverage=1 00:06:59.256 --rc genhtml_legend=1 00:06:59.256 --rc geninfo_all_blocks=1 00:06:59.256 --rc geninfo_unexecuted_blocks=1 00:06:59.256 00:06:59.256 ' 00:06:59.256 18:13:57 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:59.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.256 --rc genhtml_branch_coverage=1 00:06:59.256 --rc genhtml_function_coverage=1 00:06:59.256 --rc genhtml_legend=1 00:06:59.256 --rc geninfo_all_blocks=1 00:06:59.256 --rc geninfo_unexecuted_blocks=1 00:06:59.256 00:06:59.256 ' 00:06:59.256 18:13:57 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:59.256 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.256 --rc genhtml_branch_coverage=1 00:06:59.256 --rc genhtml_function_coverage=1 00:06:59.256 --rc genhtml_legend=1 00:06:59.256 --rc geninfo_all_blocks=1 00:06:59.256 --rc geninfo_unexecuted_blocks=1 00:06:59.256 00:06:59.256 ' 00:06:59.256 18:13:57 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:59.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.256 --rc genhtml_branch_coverage=1 00:06:59.256 --rc genhtml_function_coverage=1 00:06:59.256 --rc genhtml_legend=1 00:06:59.256 --rc geninfo_all_blocks=1 00:06:59.256 --rc geninfo_unexecuted_blocks=1 00:06:59.256 00:06:59.256 ' 00:06:59.256 18:13:57 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:59.256 18:13:57 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:59.256 18:13:57 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.256 18:13:57 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.256 ************************************ 00:06:59.256 START TEST thread_poller_perf 00:06:59.256 ************************************ 00:06:59.256 18:13:57 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:59.514 [2024-11-18 18:13:57.602102] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:59.514 [2024-11-18 18:13:57.602229] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833544 ] 00:06:59.514 [2024-11-18 18:13:57.744646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.772 [2024-11-18 18:13:57.883726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.772 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:01.146 [2024-11-18T17:13:59.483Z] ====================================== 00:07:01.146 [2024-11-18T17:13:59.483Z] busy:2711755857 (cyc) 00:07:01.146 [2024-11-18T17:13:59.483Z] total_run_count: 282000 00:07:01.146 [2024-11-18T17:13:59.483Z] tsc_hz: 2700000000 (cyc) 00:07:01.146 [2024-11-18T17:13:59.483Z] ====================================== 00:07:01.146 [2024-11-18T17:13:59.483Z] poller_cost: 9616 (cyc), 3561 (nsec) 00:07:01.146 00:07:01.146 real 0m1.581s 00:07:01.146 user 0m1.422s 00:07:01.146 sys 0m0.150s 00:07:01.146 18:13:59 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.146 18:13:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.146 ************************************ 00:07:01.146 END TEST thread_poller_perf 00:07:01.146 ************************************ 00:07:01.146 18:13:59 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:01.146 18:13:59 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:01.146 18:13:59 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.146 18:13:59 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.146 ************************************ 00:07:01.146 START TEST thread_poller_perf 00:07:01.146 
************************************ 00:07:01.146 18:13:59 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:01.146 [2024-11-18 18:13:59.237743] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:01.146 [2024-11-18 18:13:59.237854] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833706 ] 00:07:01.146 [2024-11-18 18:13:59.381287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.404 [2024-11-18 18:13:59.520397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.404 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:02.779 [2024-11-18T17:14:01.116Z] ====================================== 00:07:02.779 [2024-11-18T17:14:01.116Z] busy:2705165237 (cyc) 00:07:02.779 [2024-11-18T17:14:01.116Z] total_run_count: 3645000 00:07:02.779 [2024-11-18T17:14:01.116Z] tsc_hz: 2700000000 (cyc) 00:07:02.779 [2024-11-18T17:14:01.116Z] ====================================== 00:07:02.779 [2024-11-18T17:14:01.116Z] poller_cost: 742 (cyc), 274 (nsec) 00:07:02.779 00:07:02.779 real 0m1.576s 00:07:02.779 user 0m1.428s 00:07:02.779 sys 0m0.139s 00:07:02.779 18:14:00 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.779 18:14:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:02.779 ************************************ 00:07:02.779 END TEST thread_poller_perf 00:07:02.779 ************************************ 00:07:02.779 18:14:00 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:02.779 00:07:02.779 real 0m3.390s 00:07:02.779 user 0m2.977s 00:07:02.779 sys 0m0.410s 00:07:02.779 18:14:00 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.779 18:14:00 thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.779 ************************************ 00:07:02.779 END TEST thread 00:07:02.779 ************************************ 00:07:02.779 18:14:00 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:02.779 18:14:00 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:02.779 18:14:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.779 18:14:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.779 18:14:00 -- common/autotest_common.sh@10 -- # set +x 00:07:02.779 ************************************ 00:07:02.779 START TEST app_cmdline 00:07:02.779 ************************************ 00:07:02.779 18:14:00 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:02.779 * Looking for test storage... 00:07:02.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:02.779 18:14:00 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:02.779 18:14:00 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:02.779 18:14:00 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:02.779 18:14:00 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.779 18:14:00 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:02.779 18:14:00 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.779 18:14:00 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:02.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.779 --rc genhtml_branch_coverage=1 
00:07:02.779 --rc genhtml_function_coverage=1 00:07:02.779 --rc genhtml_legend=1 00:07:02.779 --rc geninfo_all_blocks=1 00:07:02.779 --rc geninfo_unexecuted_blocks=1 00:07:02.779 00:07:02.779 ' 00:07:02.779 18:14:00 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:02.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.779 --rc genhtml_branch_coverage=1 00:07:02.779 --rc genhtml_function_coverage=1 00:07:02.779 --rc genhtml_legend=1 00:07:02.779 --rc geninfo_all_blocks=1 00:07:02.779 --rc geninfo_unexecuted_blocks=1 00:07:02.779 00:07:02.779 ' 00:07:02.779 18:14:00 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:02.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.779 --rc genhtml_branch_coverage=1 00:07:02.779 --rc genhtml_function_coverage=1 00:07:02.779 --rc genhtml_legend=1 00:07:02.779 --rc geninfo_all_blocks=1 00:07:02.779 --rc geninfo_unexecuted_blocks=1 00:07:02.779 00:07:02.779 ' 00:07:02.779 18:14:00 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:02.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.779 --rc genhtml_branch_coverage=1 00:07:02.779 --rc genhtml_function_coverage=1 00:07:02.779 --rc genhtml_legend=1 00:07:02.779 --rc geninfo_all_blocks=1 00:07:02.779 --rc geninfo_unexecuted_blocks=1 00:07:02.779 00:07:02.779 ' 00:07:02.779 18:14:00 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:02.779 18:14:00 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2834036 00:07:02.779 18:14:00 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:02.779 18:14:00 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2834036 00:07:02.779 18:14:00 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2834036 ']' 00:07:02.779 18:14:00 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:02.779 18:14:00 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.779 18:14:00 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.779 18:14:00 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.779 18:14:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.779 [2024-11-18 18:14:01.089223] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:02.779 [2024-11-18 18:14:01.089370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834036 ] 00:07:03.037 [2024-11-18 18:14:01.230543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.037 [2024-11-18 18:14:01.367839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.411 18:14:02 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.411 18:14:02 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:04.411 18:14:02 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:04.411 { 00:07:04.411 "version": "SPDK v25.01-pre git sha1 d47eb51c9", 00:07:04.411 "fields": { 00:07:04.411 "major": 25, 00:07:04.411 "minor": 1, 00:07:04.411 "patch": 0, 00:07:04.411 "suffix": "-pre", 00:07:04.411 "commit": "d47eb51c9" 00:07:04.411 } 00:07:04.411 } 00:07:04.411 18:14:02 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:04.411 18:14:02 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:04.411 18:14:02 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:07:04.411 18:14:02 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:04.411 18:14:02 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:04.411 18:14:02 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:04.411 18:14:02 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.411 18:14:02 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:04.411 18:14:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.411 18:14:02 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.411 18:14:02 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:04.411 18:14:02 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:04.411 18:14:02 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.411 18:14:02 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:04.411 18:14:02 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.411 18:14:02 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.411 18:14:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:04.411 18:14:02 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.411 18:14:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:04.411 18:14:02 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.411 18:14:02 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:07:04.411 18:14:02 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.411 18:14:02 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:04.411 18:14:02 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.669 request: 00:07:04.669 { 00:07:04.669 "method": "env_dpdk_get_mem_stats", 00:07:04.669 "req_id": 1 00:07:04.669 } 00:07:04.669 Got JSON-RPC error response 00:07:04.669 response: 00:07:04.669 { 00:07:04.669 "code": -32601, 00:07:04.669 "message": "Method not found" 00:07:04.669 } 00:07:04.669 18:14:02 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:04.669 18:14:02 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:04.669 18:14:02 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:04.669 18:14:02 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:04.669 18:14:02 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2834036 00:07:04.669 18:14:02 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2834036 ']' 00:07:04.669 18:14:02 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2834036 00:07:04.669 18:14:02 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:04.669 18:14:02 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.669 18:14:02 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2834036 00:07:04.669 18:14:02 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.669 18:14:02 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.669 18:14:02 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2834036' 00:07:04.669 killing process with pid 2834036 00:07:04.669 
18:14:02 app_cmdline -- common/autotest_common.sh@973 -- # kill 2834036 00:07:04.669 18:14:02 app_cmdline -- common/autotest_common.sh@978 -- # wait 2834036 00:07:07.199 00:07:07.199 real 0m4.487s 00:07:07.199 user 0m4.898s 00:07:07.199 sys 0m0.697s 00:07:07.199 18:14:05 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.199 18:14:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:07.199 ************************************ 00:07:07.199 END TEST app_cmdline 00:07:07.199 ************************************ 00:07:07.199 18:14:05 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:07.199 18:14:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.199 18:14:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.199 18:14:05 -- common/autotest_common.sh@10 -- # set +x 00:07:07.199 ************************************ 00:07:07.199 START TEST version 00:07:07.199 ************************************ 00:07:07.199 18:14:05 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:07.199 * Looking for test storage... 
00:07:07.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:07.199 18:14:05 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.199 18:14:05 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.199 18:14:05 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.199 18:14:05 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.199 18:14:05 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.199 18:14:05 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.199 18:14:05 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.199 18:14:05 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.199 18:14:05 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.199 18:14:05 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.199 18:14:05 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.199 18:14:05 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.199 18:14:05 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.199 18:14:05 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.199 18:14:05 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.199 18:14:05 version -- scripts/common.sh@344 -- # case "$op" in 00:07:07.199 18:14:05 version -- scripts/common.sh@345 -- # : 1 00:07:07.200 18:14:05 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.200 18:14:05 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.200 18:14:05 version -- scripts/common.sh@365 -- # decimal 1 00:07:07.200 18:14:05 version -- scripts/common.sh@353 -- # local d=1 00:07:07.200 18:14:05 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.200 18:14:05 version -- scripts/common.sh@355 -- # echo 1 00:07:07.200 18:14:05 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.200 18:14:05 version -- scripts/common.sh@366 -- # decimal 2 00:07:07.200 18:14:05 version -- scripts/common.sh@353 -- # local d=2 00:07:07.200 18:14:05 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.200 18:14:05 version -- scripts/common.sh@355 -- # echo 2 00:07:07.200 18:14:05 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.200 18:14:05 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.200 18:14:05 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.200 18:14:05 version -- scripts/common.sh@368 -- # return 0 00:07:07.200 18:14:05 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.200 18:14:05 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.200 --rc genhtml_branch_coverage=1 00:07:07.200 --rc genhtml_function_coverage=1 00:07:07.200 --rc genhtml_legend=1 00:07:07.200 --rc geninfo_all_blocks=1 00:07:07.200 --rc geninfo_unexecuted_blocks=1 00:07:07.200 00:07:07.200 ' 00:07:07.200 18:14:05 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.200 --rc genhtml_branch_coverage=1 00:07:07.200 --rc genhtml_function_coverage=1 00:07:07.200 --rc genhtml_legend=1 00:07:07.200 --rc geninfo_all_blocks=1 00:07:07.200 --rc geninfo_unexecuted_blocks=1 00:07:07.200 00:07:07.200 ' 00:07:07.200 18:14:05 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.200 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.200 --rc genhtml_branch_coverage=1 00:07:07.200 --rc genhtml_function_coverage=1 00:07:07.200 --rc genhtml_legend=1 00:07:07.200 --rc geninfo_all_blocks=1 00:07:07.200 --rc geninfo_unexecuted_blocks=1 00:07:07.200 00:07:07.200 ' 00:07:07.200 18:14:05 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.200 --rc genhtml_branch_coverage=1 00:07:07.200 --rc genhtml_function_coverage=1 00:07:07.200 --rc genhtml_legend=1 00:07:07.200 --rc geninfo_all_blocks=1 00:07:07.200 --rc geninfo_unexecuted_blocks=1 00:07:07.200 00:07:07.200 ' 00:07:07.200 18:14:05 version -- app/version.sh@17 -- # get_header_version major 00:07:07.200 18:14:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:07.200 18:14:05 version -- app/version.sh@14 -- # cut -f2 00:07:07.200 18:14:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:07.459 18:14:05 version -- app/version.sh@17 -- # major=25 00:07:07.459 18:14:05 version -- app/version.sh@18 -- # get_header_version minor 00:07:07.459 18:14:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:07.459 18:14:05 version -- app/version.sh@14 -- # cut -f2 00:07:07.459 18:14:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:07.459 18:14:05 version -- app/version.sh@18 -- # minor=1 00:07:07.459 18:14:05 version -- app/version.sh@19 -- # get_header_version patch 00:07:07.459 18:14:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:07.459 18:14:05 version -- app/version.sh@14 -- # cut -f2 00:07:07.459 18:14:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:07.459 
18:14:05 version -- app/version.sh@19 -- # patch=0 00:07:07.459 18:14:05 version -- app/version.sh@20 -- # get_header_version suffix 00:07:07.459 18:14:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:07.459 18:14:05 version -- app/version.sh@14 -- # cut -f2 00:07:07.459 18:14:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:07.459 18:14:05 version -- app/version.sh@20 -- # suffix=-pre 00:07:07.459 18:14:05 version -- app/version.sh@22 -- # version=25.1 00:07:07.459 18:14:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:07.459 18:14:05 version -- app/version.sh@28 -- # version=25.1rc0 00:07:07.459 18:14:05 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:07.459 18:14:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:07.459 18:14:05 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:07.459 18:14:05 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:07.459 00:07:07.459 real 0m0.207s 00:07:07.459 user 0m0.141s 00:07:07.459 sys 0m0.090s 00:07:07.459 18:14:05 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.459 18:14:05 version -- common/autotest_common.sh@10 -- # set +x 00:07:07.459 ************************************ 00:07:07.459 END TEST version 00:07:07.459 ************************************ 00:07:07.459 18:14:05 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:07.459 18:14:05 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:07.459 18:14:05 -- spdk/autotest.sh@194 -- # uname -s 00:07:07.459 18:14:05 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:07.459 18:14:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:07.459 18:14:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:07.459 18:14:05 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:07.459 18:14:05 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:07.459 18:14:05 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:07.459 18:14:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:07.459 18:14:05 -- common/autotest_common.sh@10 -- # set +x 00:07:07.459 18:14:05 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:07.459 18:14:05 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:07.459 18:14:05 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:07.459 18:14:05 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:07.459 18:14:05 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:07.459 18:14:05 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:07.459 18:14:05 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:07.459 18:14:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:07.459 18:14:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.459 18:14:05 -- common/autotest_common.sh@10 -- # set +x 00:07:07.459 ************************************ 00:07:07.459 START TEST nvmf_tcp 00:07:07.459 ************************************ 00:07:07.459 18:14:05 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:07.459 * Looking for test storage... 
00:07:07.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:07.459 18:14:05 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.459 18:14:05 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.459 18:14:05 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.459 18:14:05 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.459 18:14:05 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.459 18:14:05 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.459 18:14:05 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.459 18:14:05 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.459 18:14:05 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.460 18:14:05 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:07.460 18:14:05 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.460 18:14:05 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.460 --rc genhtml_branch_coverage=1 00:07:07.460 --rc genhtml_function_coverage=1 00:07:07.460 --rc genhtml_legend=1 00:07:07.460 --rc geninfo_all_blocks=1 00:07:07.460 --rc geninfo_unexecuted_blocks=1 00:07:07.460 00:07:07.460 ' 00:07:07.460 18:14:05 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.460 --rc genhtml_branch_coverage=1 00:07:07.460 --rc genhtml_function_coverage=1 00:07:07.460 --rc genhtml_legend=1 00:07:07.460 --rc geninfo_all_blocks=1 00:07:07.460 --rc geninfo_unexecuted_blocks=1 00:07:07.460 00:07:07.460 ' 00:07:07.460 18:14:05 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:07.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.460 --rc genhtml_branch_coverage=1 00:07:07.460 --rc genhtml_function_coverage=1 00:07:07.460 --rc genhtml_legend=1 00:07:07.460 --rc geninfo_all_blocks=1 00:07:07.460 --rc geninfo_unexecuted_blocks=1 00:07:07.460 00:07:07.460 ' 00:07:07.460 18:14:05 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.460 --rc genhtml_branch_coverage=1 00:07:07.460 --rc genhtml_function_coverage=1 00:07:07.460 --rc genhtml_legend=1 00:07:07.460 --rc geninfo_all_blocks=1 00:07:07.460 --rc geninfo_unexecuted_blocks=1 00:07:07.460 00:07:07.460 ' 00:07:07.460 18:14:05 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:07.460 18:14:05 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:07.460 18:14:05 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:07.460 18:14:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:07.460 18:14:05 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.460 18:14:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.718 ************************************ 00:07:07.719 START TEST nvmf_target_core 00:07:07.719 ************************************ 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:07.719 * Looking for test storage... 
00:07:07.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.719 --rc genhtml_branch_coverage=1 00:07:07.719 --rc genhtml_function_coverage=1 00:07:07.719 --rc genhtml_legend=1 00:07:07.719 --rc geninfo_all_blocks=1 00:07:07.719 --rc geninfo_unexecuted_blocks=1 00:07:07.719 00:07:07.719 ' 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.719 --rc genhtml_branch_coverage=1 
00:07:07.719 --rc genhtml_function_coverage=1 00:07:07.719 --rc genhtml_legend=1 00:07:07.719 --rc geninfo_all_blocks=1 00:07:07.719 --rc geninfo_unexecuted_blocks=1 00:07:07.719 00:07:07.719 ' 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.719 --rc genhtml_branch_coverage=1 00:07:07.719 --rc genhtml_function_coverage=1 00:07:07.719 --rc genhtml_legend=1 00:07:07.719 --rc geninfo_all_blocks=1 00:07:07.719 --rc geninfo_unexecuted_blocks=1 00:07:07.719 00:07:07.719 ' 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.719 --rc genhtml_branch_coverage=1 00:07:07.719 --rc genhtml_function_coverage=1 00:07:07.719 --rc genhtml_legend=1 00:07:07.719 --rc geninfo_all_blocks=1 00:07:07.719 --rc geninfo_unexecuted_blocks=1 00:07:07.719 00:07:07.719 ' 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:07.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:07.719 ************************************ 00:07:07.719 START TEST nvmf_abort 00:07:07.719 ************************************ 00:07:07.719 18:14:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:07.719 * Looking for test storage... 
00:07:07.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.719 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.719 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.719 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.978 
18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.978 --rc genhtml_branch_coverage=1 00:07:07.978 --rc genhtml_function_coverage=1 00:07:07.978 --rc genhtml_legend=1 00:07:07.978 --rc geninfo_all_blocks=1 00:07:07.978 --rc 
geninfo_unexecuted_blocks=1 00:07:07.978 00:07:07.978 ' 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.978 --rc genhtml_branch_coverage=1 00:07:07.978 --rc genhtml_function_coverage=1 00:07:07.978 --rc genhtml_legend=1 00:07:07.978 --rc geninfo_all_blocks=1 00:07:07.978 --rc geninfo_unexecuted_blocks=1 00:07:07.978 00:07:07.978 ' 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.978 --rc genhtml_branch_coverage=1 00:07:07.978 --rc genhtml_function_coverage=1 00:07:07.978 --rc genhtml_legend=1 00:07:07.978 --rc geninfo_all_blocks=1 00:07:07.978 --rc geninfo_unexecuted_blocks=1 00:07:07.978 00:07:07.978 ' 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.978 --rc genhtml_branch_coverage=1 00:07:07.978 --rc genhtml_function_coverage=1 00:07:07.978 --rc genhtml_legend=1 00:07:07.978 --rc geninfo_all_blocks=1 00:07:07.978 --rc geninfo_unexecuted_blocks=1 00:07:07.978 00:07:07.978 ' 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.978 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.979 18:14:06 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:07.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:07.979 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.511 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:10.512 18:14:08 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:10.512 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:10.512 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:10.512 18:14:08 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:10.512 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:0a:00.1: cvl_0_1' 00:07:10.512 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:10.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:10.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:07:10.512 00:07:10.512 --- 10.0.0.2 ping statistics --- 00:07:10.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.512 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:07:10.512 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:10.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:10.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:07:10.513 00:07:10.513 --- 10.0.0.1 ping statistics --- 00:07:10.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.513 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2836425 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2836425 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2836425 ']' 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.513 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.513 [2024-11-18 18:14:08.600225] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:10.513 [2024-11-18 18:14:08.600384] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.513 [2024-11-18 18:14:08.769715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.819 [2024-11-18 18:14:08.919537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.819 [2024-11-18 18:14:08.919631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.819 [2024-11-18 18:14:08.919668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.819 [2024-11-18 18:14:08.919694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.819 [2024-11-18 18:14:08.919715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:10.820 [2024-11-18 18:14:08.922439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.820 [2024-11-18 18:14:08.922492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.820 [2024-11-18 18:14:08.922498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:11.432 [2024-11-18 18:14:09.551339] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:11.432 Malloc0 00:07:11.432 18:14:09 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:11.432 Delay0 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:11.432 [2024-11-18 18:14:09.676310] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:11.432 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.433 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:11.433 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.433 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:11.691 [2024-11-18 18:14:09.874802] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:14.222 Initializing NVMe Controllers 00:07:14.222 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:14.222 controller IO queue size 128 less than required 00:07:14.222 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:14.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:14.222 Initialization complete. Launching workers. 
00:07:14.222 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 19937 00:07:14.222 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 19994, failed to submit 66 00:07:14.222 success 19937, unsuccessful 57, failed 0 00:07:14.222 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:14.222 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.222 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:14.222 rmmod nvme_tcp 00:07:14.222 rmmod nvme_fabrics 00:07:14.222 rmmod nvme_keyring 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:14.222 18:14:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2836425 ']' 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2836425 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2836425 ']' 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2836425 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2836425 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2836425' 00:07:14.222 killing process with pid 2836425 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2836425 00:07:14.222 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2836425 00:07:15.157 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:15.157 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:15.157 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:15.157 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:15.157 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:15.157 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:07:15.157 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:15.157 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:15.157 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:15.157 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.157 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.157 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.059 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:17.059 00:07:17.059 real 0m9.367s 00:07:17.059 user 0m15.341s 00:07:17.059 sys 0m2.713s 00:07:17.059 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.060 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.060 ************************************ 00:07:17.060 END TEST nvmf_abort 00:07:17.060 ************************************ 00:07:17.060 18:14:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:17.060 18:14:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:17.060 18:14:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.060 18:14:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:17.318 ************************************ 00:07:17.318 START TEST nvmf_ns_hotplug_stress 00:07:17.318 ************************************ 00:07:17.318 18:14:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:17.318 * Looking for test storage... 00:07:17.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.318 
18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.318 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.319 18:14:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:17.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.319 --rc genhtml_branch_coverage=1 00:07:17.319 --rc genhtml_function_coverage=1 00:07:17.319 --rc genhtml_legend=1 00:07:17.319 --rc geninfo_all_blocks=1 00:07:17.319 --rc geninfo_unexecuted_blocks=1 00:07:17.319 00:07:17.319 ' 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:17.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.319 --rc genhtml_branch_coverage=1 00:07:17.319 --rc genhtml_function_coverage=1 00:07:17.319 --rc genhtml_legend=1 00:07:17.319 --rc geninfo_all_blocks=1 00:07:17.319 --rc geninfo_unexecuted_blocks=1 00:07:17.319 00:07:17.319 ' 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:17.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.319 --rc genhtml_branch_coverage=1 00:07:17.319 --rc genhtml_function_coverage=1 00:07:17.319 --rc genhtml_legend=1 00:07:17.319 --rc geninfo_all_blocks=1 00:07:17.319 --rc geninfo_unexecuted_blocks=1 00:07:17.319 00:07:17.319 ' 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:17.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.319 --rc genhtml_branch_coverage=1 00:07:17.319 --rc genhtml_function_coverage=1 00:07:17.319 --rc genhtml_legend=1 00:07:17.319 --rc geninfo_all_blocks=1 00:07:17.319 --rc geninfo_unexecuted_blocks=1 00:07:17.319 
00:07:17.319 ' 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:17.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:17.319 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.320 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:17.320 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:17.320 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:17.320 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.320 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:17.320 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.320 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:17.320 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:17.320 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:17.320 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:19.848 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:19.848 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:19.848 18:14:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:19.848 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:19.848 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:19.848 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:19.848 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:19.848 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:19.848 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:19.848 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:19.848 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:19.849 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:19.849 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:19.849 18:14:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:19.849 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:19.849 18:14:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:19.849 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:19.849 18:14:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:19.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:19.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:07:19.849 00:07:19.849 --- 10.0.0.2 ping statistics --- 00:07:19.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.849 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:19.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:19.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:07:19.849 00:07:19.849 --- 10.0.0.1 ping statistics --- 00:07:19.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.849 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:19.849 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:19.850 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:19.850 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:19.850 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:19.850 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:19.850 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:19.850 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:19.850 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:19.850 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:19.850 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2839023 00:07:19.850 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:19.850 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2839023 00:07:19.850 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2839023 ']' 00:07:19.850 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.850 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.850 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:19.850 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.850 18:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:19.850 [2024-11-18 18:14:17.840546] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:19.850 [2024-11-18 18:14:17.840715] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.850 [2024-11-18 18:14:18.014799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:19.850 [2024-11-18 18:14:18.154669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.850 [2024-11-18 18:14:18.154740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.850 [2024-11-18 18:14:18.154765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:19.850 [2024-11-18 18:14:18.154788] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:19.850 [2024-11-18 18:14:18.154805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:19.850 [2024-11-18 18:14:18.157065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:19.850 [2024-11-18 18:14:18.157107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:19.850 [2024-11-18 18:14:18.157112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:20.784 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:20.784 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:07:20.784 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:20.784 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:20.784 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:20.784 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:20.784 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:07:20.784 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:07:21.048 [2024-11-18 18:14:19.146038] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:21.048 18:14:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:21.305 18:14:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:21.562 [2024-11-18 18:14:19.711978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:21.562 18:14:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:21.820 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:07:22.078 Malloc0
00:07:22.079 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:07:22.336 Delay0
00:07:22.336 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:22.595 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:07:22.852 NULL1
00:07:22.853 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:07:23.110 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2839459
00:07:23.110 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:07:23.110 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:23.110 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:24.481 Read completed with error (sct=0, sc=11)
00:07:24.481 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:24.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:24.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:24.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:24.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:24.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:24.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:24.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:24.738 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:07:24.738 18:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:07:24.995 true
00:07:24.995 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:24.995 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:25.926 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:26.184 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:07:26.184 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:07:26.442 true
00:07:26.442 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:26.442 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:26.700 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:26.957 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:07:26.957 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:07:27.215 true
00:07:27.215 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:27.215 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:27.472 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:27.730 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:07:27.730 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:07:27.987 true
00:07:27.987 18:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:27.987 18:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:28.920 18:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:28.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:28.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:28.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:29.178 18:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:07:29.178 18:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:07:29.436 true
00:07:29.436 18:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:29.436 18:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:29.693 18:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:29.950 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:07:29.950 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:07:30.207 true
00:07:30.207 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:30.207 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:31.139 18:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:31.397 18:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:07:31.397 18:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:07:31.654 true
00:07:31.654 18:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:31.654 18:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:31.913 18:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:32.479 18:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:07:32.479 18:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:07:32.479 true
00:07:32.479 18:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:32.479 18:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:32.737 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:33.303 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:07:33.303 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:07:33.303 true
00:07:33.303 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:33.303 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:34.236 18:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
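The entries above repeat a fixed pattern driven by ns_hotplug_stress.sh: while spdk_nvme_perf issues I/O, the script checks the perf process is alive (sh@44), detaches and re-attaches the Delay0 namespace (sh@45/46), and grows the NULL1 bdev by one block per iteration (sh@49/50). A minimal runnable sketch of that loop, with a stubbed `rpc` function standing in for scripts/rpc.py (the stub is an assumption for illustration; the real script talks to a live SPDK target):

```shell
# Stub standing in for scripts/rpc.py (assumption: real calls go to a live SPDK target).
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1000          # matches ns_hotplug_stress.sh@25 in the log
PERF_PID=$$             # stand-in for the real spdk_nvme_perf PID

iters=0
# ns_hotplug_stress.sh@44: keep hotplugging while the perf process is alive
while [ "$iters" -lt 3 ] && kill -0 "$PERF_PID" 2>/dev/null; do
    rpc nvmf_subsystem_remove_ns "$NQN" 1        # sh@45: detach namespace 1
    rpc nvmf_subsystem_add_ns "$NQN" Delay0      # sh@46: re-attach Delay0
    null_size=$((null_size + 1))                 # sh@49: 1000 -> 1001 -> ...
    rpc bdev_null_resize NULL1 "$null_size"      # sh@50: grow NULL1 by one block
    iters=$((iters + 1))
done
```

The "Read completed with error (sct=0, sc=11)" lines interleaved in the log are the expected host-side effect of each detach: in-flight reads against the removed namespace complete with that status while the loop runs.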
00:07:34.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:34.494 18:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:07:34.494 18:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:07:34.752 true
00:07:34.752 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:34.752 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:35.010 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:35.268 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:07:35.268 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:07:35.526 true
00:07:35.526 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:35.526 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:35.784 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:36.042 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:07:36.042 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:07:36.300 true
00:07:36.559 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:36.559 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:37.493 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:37.493 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:37.751 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:07:37.751 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:07:38.020 true
00:07:38.020 18:14:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:38.020 18:14:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:38.284 18:14:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:38.542 18:14:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:07:38.542 18:14:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:07:38.800 true
00:07:38.800 18:14:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:38.800 18:14:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:39.058 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:39.316 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:07:39.316 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:07:39.574 true
00:07:39.574 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:39.574 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:40.507 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:40.766 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:07:40.766 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:07:41.024 true
00:07:41.024 18:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:41.024 18:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:41.282 18:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:41.539 18:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:07:41.539 18:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:07:41.797 true
00:07:42.054 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:42.054 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:42.313 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:42.609 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:07:42.609 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:07:42.886 true
00:07:42.886 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:42.886 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:43.819 18:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:43.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:43.819 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:07:43.819 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:07:44.077 true
00:07:44.077 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:44.077 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:44.334 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:44.592 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:07:44.592 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:07:44.849 true
00:07:44.849 18:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:44.849 18:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:45.107 18:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:45.672 18:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:07:45.672 18:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:07:45.672 true
00:07:45.672 18:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:45.672 18:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:46.605 18:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:46.862 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:07:46.862 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:07:47.427 true
00:07:47.427 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:47.427 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:47.685 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:47.942 18:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:07:47.942 18:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:07:48.200 true
00:07:48.200 18:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:48.200 18:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:48.458 18:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:48.716 18:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:07:48.716 18:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:07:48.973 true
00:07:48.974 18:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:48.974 18:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:49.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:49.906 18:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:50.164 18:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:07:50.164 18:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:07:50.422 true
00:07:50.422 18:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:50.422 18:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:50.680 18:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:50.937 18:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:07:50.937 18:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:07:51.194 true
00:07:51.194 18:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:51.194 18:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:52.126 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:52.384 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:07:52.384 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:07:52.641 true
00:07:52.641 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:52.641 18:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:52.899 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:53.156 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:07:53.156 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:07:53.414 true
00:07:53.414 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:53.414 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:53.414 Initializing NVMe Controllers
00:07:53.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:53.414 Controller IO queue size 128, less than required.
00:07:53.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:53.414 Controller IO queue size 128, less than required.
00:07:53.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:53.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:53.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:53.414 Initialization complete. Launching workers.
00:07:53.414 ========================================================
00:07:53.414 Latency(us)
00:07:53.414 Device Information : IOPS MiB/s Average min max
00:07:53.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 544.16 0.27 97892.06 3724.23 1026045.83
00:07:53.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6793.44 3.32 18843.83 4561.42 481854.09
00:07:53.414 ========================================================
00:07:53.414 Total : 7337.61 3.58 24706.10 3724.23 1026045.83
00:07:53.414
00:07:53.672 18:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:53.929 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:07:53.929 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:07:54.186 true
00:07:54.186 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2839459
00:07:54.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2839459) - No such process
00:07:54.186 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2839459
00:07:54.186 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:54.444 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:54.702 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:54.702 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:54.702 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:54.702 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:54.702 18:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:54.959 null0
00:07:54.959 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:54.959 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:54.959 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:55.217 null1
00:07:55.217 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:55.217 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:55.217 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:55.475 null2
00:07:55.475 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:55.475 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:55.475 18:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:07:55.733 null3
00:07:55.733 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:55.733 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:55.733 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:07:55.990 null4
00:07:56.247 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:56.247 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:56.247 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:07:56.505 null5
00:07:56.505 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:56.505 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:56.505 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:07:56.762 null6
00:07:56.762 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:56.762 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:56.762 18:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:07:57.020 null7
00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:57.020 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2843544 2843545 2843547 2843549 2843551 2843553 2843555 2843557 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.021 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:57.280 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:57.280 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:57.280 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:57.280 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.280 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:57.280 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:57.280 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:57.280 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.538 18:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:57.797 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:57.797 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:57.797 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.797 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:57.797 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:07:57.797 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:57.797 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:57.797 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.056 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:58.314 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:58.314 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:58.314 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.572 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:58.572 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:58.572 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:58.572 18:14:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:58.572 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.830 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:59.088 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.088 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:59.088 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:59.088 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:59.088 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:59.088 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:59.088 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:59.088 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.346 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:59.605 18:14:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:59.605 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.605 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:59.605 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:59.605 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:59.605 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:59.605 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:59.605 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:59.863 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.863 18:14:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.863 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:59.863 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.863 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.864 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:59.864 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.864 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.864 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:59.864 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.864 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.864 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:59.864 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.864 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:59.864 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:59.864 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.864 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.864 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:59.864 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.864 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.864 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.864 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:59.864 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.864 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:00.122 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:00.122 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:00.122 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.380 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:00.380 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:00.380 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:00.380 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:00.380 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:00.638 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.638 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.638 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:08:00.638 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.638 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.638 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:00.638 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.638 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.639 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:00.639 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.639 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.639 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.639 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.639 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:00.639 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:00.639 18:14:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.639 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.639 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:00.639 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.639 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.639 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.639 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:00.639 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.639 18:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:00.897 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:00.897 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.897 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:00.897 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:00.897 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:00.897 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:00.897 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:00.897 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.156 
18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.156 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.414 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.414 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:01.414 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:01.414 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.414 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:01.414 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:01.414 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:01.414 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:01.672 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.672 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.673 18:14:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.673 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:02.239 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:02.239 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.239 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:02.239 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:02.239 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:02.239 18:15:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:02.239 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.239 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:02.239 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.239 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.239 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.497 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:02.755 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:02.755 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:02.755 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:02.755 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:02.755 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.755 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:02.755 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.755 18:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:03.014 rmmod nvme_tcp 00:08:03.014 rmmod nvme_fabrics 00:08:03.014 rmmod nvme_keyring 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2839023 ']' 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2839023 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # 
'[' -z 2839023 ']' 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2839023 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2839023 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2839023' 00:08:03.014 killing process with pid 2839023 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2839023 00:08:03.014 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2839023 00:08:04.435 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:04.435 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:04.435 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:04.435 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:04.435 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:04.435 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:04.435 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # iptables-restore 00:08:04.435 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:04.435 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:04.435 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.435 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.435 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.366 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:06.366 00:08:06.366 real 0m49.125s 00:08:06.366 user 3m45.371s 00:08:06.366 sys 0m16.110s 00:08:06.366 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.366 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:06.366 ************************************ 00:08:06.366 END TEST nvmf_ns_hotplug_stress 00:08:06.366 ************************************ 00:08:06.366 18:15:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:06.366 18:15:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:06.366 18:15:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.366 18:15:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:06.366 ************************************ 00:08:06.366 START TEST nvmf_delete_subsystem 00:08:06.366 ************************************ 00:08:06.366 
18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:06.366 * Looking for test storage... 00:08:06.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.366 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:06.366 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:06.366 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:06.624 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:06.624 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.624 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.624 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.624 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.624 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.624 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.624 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.624 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.624 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.624 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.625 18:15:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.625 18:15:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:06.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.625 --rc genhtml_branch_coverage=1 00:08:06.625 --rc genhtml_function_coverage=1 00:08:06.625 --rc genhtml_legend=1 00:08:06.625 --rc geninfo_all_blocks=1 00:08:06.625 --rc geninfo_unexecuted_blocks=1 00:08:06.625 00:08:06.625 ' 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:06.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.625 --rc genhtml_branch_coverage=1 00:08:06.625 --rc genhtml_function_coverage=1 00:08:06.625 --rc genhtml_legend=1 00:08:06.625 --rc geninfo_all_blocks=1 00:08:06.625 --rc geninfo_unexecuted_blocks=1 00:08:06.625 00:08:06.625 ' 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:06.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.625 --rc genhtml_branch_coverage=1 00:08:06.625 --rc genhtml_function_coverage=1 00:08:06.625 --rc genhtml_legend=1 00:08:06.625 --rc geninfo_all_blocks=1 00:08:06.625 --rc geninfo_unexecuted_blocks=1 00:08:06.625 00:08:06.625 ' 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:06.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.625 --rc genhtml_branch_coverage=1 00:08:06.625 --rc genhtml_function_coverage=1 00:08:06.625 --rc genhtml_legend=1 00:08:06.625 --rc geninfo_all_blocks=1 00:08:06.625 --rc geninfo_unexecuted_blocks=1 00:08:06.625 00:08:06.625 ' 
00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.625 18:15:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:06.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.625 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:06.626 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:06.626 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:06.626 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:08.526 18:15:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:08.526 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:08.526 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:08.526 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:08:08.526 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.526 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.785 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.785 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.785 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:08.785 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.785 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.785 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.785 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:08.785 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:08.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:08.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:08:08.785 00:08:08.785 --- 10.0.0.2 ping statistics --- 00:08:08.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.785 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:08:08.785 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:08:08.785 00:08:08.785 --- 10.0.0.1 ping statistics --- 00:08:08.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.785 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:08:08.785 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.785 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:08.785 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:08.785 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.785 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:08.785 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:08.785 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.785 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:08.785 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:08.786 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:08.786 18:15:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:08.786 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:08.786 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.786 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2846694 00:08:08.786 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:08.786 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2846694 00:08:08.786 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2846694 ']' 00:08:08.786 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.786 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.786 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.786 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.786 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.786 [2024-11-18 18:15:07.067405] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
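The `waitforlisten 2846694` step above blocks the script until the freshly spawned `nvmf_tgt` is up and answering on `/var/tmp/spdk.sock` before any `rpc_cmd` calls are issued. A minimal sketch of that polling pattern, under assumptions: the `wait_for_socket` name is invented for illustration, the plain `-e` existence test is a stand-in (the real `waitforlisten` also probes the RPC server itself), and only the socket path and `max_retries=100` budget are taken from the log:

```shell
# Poll until a path (the RPC UNIX socket in the log above) appears, or give
# up after a retry budget, mirroring the waitforlisten pattern in the trace.
# NOTE: the -e existence test is a stand-in; the real harness also probes
# the RPC server itself before declaring the target ready.
wait_for_socket() {
    local sock=$1 retries=${2:-100} i
    for ((i = 0; i < retries; i++)); do
        if [ -e "$sock" ]; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

Only once this returns does the script proceed to `timing_exit start_nvmf_tgt` and the `nvmf_create_transport` RPC below.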
00:08:08.786 [2024-11-18 18:15:07.067551] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.045 [2024-11-18 18:15:07.223077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:09.045 [2024-11-18 18:15:07.361304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.045 [2024-11-18 18:15:07.361396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.045 [2024-11-18 18:15:07.361423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.045 [2024-11-18 18:15:07.361457] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.045 [2024-11-18 18:15:07.361478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
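The two "Reactor started" notices that follow are the direct consequence of the `-m 0x3` core mask passed to `nvmf_tgt` (bits 0 and 1 set, so cores 0 and 1); the perf client later uses `-c 0xC`, which is why the latency table reports lcore 2 and lcore 3. A small sketch of how such a mask expands into core IDs (`cores_from_mask` is an illustrative helper, not part of the SPDK scripts):

```shell
# Expand a CPU core mask such as the target's -m 0x3 (or perf's -c 0xC)
# into the list of core IDs it selects, least-significant bit = core 0.
cores_from_mask() {
    local mask=$(( $1 )) core=0 out=()
    while (( mask )); do
        if (( mask & 1 )); then
            out+=("$core")
        fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${out[*]}"
}
```

For example, `cores_from_mask 0x3` yields `0 1`, matching the two reactor notices above.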
00:08:09.045 [2024-11-18 18:15:07.364133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.045 [2024-11-18 18:15:07.364134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.979 [2024-11-18 18:15:08.075726] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.979 [2024-11-18 18:15:08.093539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.979 NULL1 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.979 Delay0 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.979 18:15:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2847133 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:09.979 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:09.979 [2024-11-18 18:15:08.237973] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
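What follows is deliberate: `delete_subsystem.sh@32` issues `nvmf_delete_subsystem` while the backgrounded `spdk_nvme_perf` (started at `@26` with `-t 5`) still has I/O in flight, so the storm of `Read/Write completed with error (sct=0, sc=8)` completions below is the expected outcome, after which the script loops on `kill -0 $perf_pid` until perf dies. A hedged sketch of that wait-for-exit loop, with the 30-tick budget and 0.5 s sleep taken from the `delay`/`sleep 0.5` lines visible further down (the function name and exact control flow are illustrative):

```shell
# Wait roughly 15 s for a backgrounded workload to exit after its target
# subsystem is deleted out from under it, as delete_subsystem.sh does with
# the perf PID: kill -0 only probes for existence, it sends no signal.
wait_for_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        if (( delay++ > 30 )); then
            echo "process $pid still alive" >&2
            return 1
        fi
        sleep 0.5
    done
    return 0
}
```

Once the PID is gone, `kill -0` itself fails with "No such process", which is exactly the message the log prints before the `NOT wait` assertion.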
00:08:11.878 18:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:11.878 18:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.878 18:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 starting I/O failed: -6 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 starting I/O failed: -6 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 starting I/O failed: -6 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 starting I/O failed: -6 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 starting I/O failed: -6 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 starting I/O failed: -6 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Write completed with error 
(sct=0, sc=8) 00:08:12.136 starting I/O failed: -6 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 starting I/O failed: -6 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 starting I/O failed: -6 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 starting I/O failed: -6 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 starting I/O failed: -6 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 starting I/O failed: -6 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 [2024-11-18 18:15:10.418047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(6) to be set 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 Write completed with error 
(sct=0, sc=8) 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Write completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.136 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 
Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 starting I/O failed: -6 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 starting I/O failed: -6 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 starting I/O failed: -6 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 starting I/O failed: -6 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 starting I/O failed: -6 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 
Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 starting I/O failed: -6 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 starting I/O failed: -6 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 starting I/O failed: -6 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 starting I/O failed: -6 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 starting I/O failed: -6 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 starting I/O failed: -6 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 starting I/O failed: -6 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 [2024-11-18 18:15:10.419962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with 
error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 
00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Read completed with error (sct=0, sc=8) 00:08:12.137 Write completed with error (sct=0, sc=8) 00:08:13.071 [2024-11-18 18:15:11.379114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set 00:08:13.329 Read completed with error (sct=0, sc=8) 00:08:13.329 Read completed with error (sct=0, sc=8) 00:08:13.329 Read completed with error (sct=0, sc=8) 00:08:13.329 Read completed with error (sct=0, sc=8) 00:08:13.329 Read completed with error (sct=0, sc=8) 00:08:13.329 Read completed with error (sct=0, sc=8) 00:08:13.329 Read completed with error (sct=0, sc=8) 00:08:13.329 Read completed with error (sct=0, sc=8) 00:08:13.329 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read 
completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 [2024-11-18 18:15:11.421252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed 
with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 [2024-11-18 18:15:11.422671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 [2024-11-18 18:15:11.424224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same 
with the state(6) to be set 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Write completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 Read completed with error (sct=0, sc=8) 00:08:13.330 [2024-11-18 18:15:11.427998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set 00:08:13.330 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.330 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:13.330 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@35 -- # kill -0 2847133 00:08:13.330 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:13.330 Initializing NVMe Controllers 00:08:13.330 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:13.330 Controller IO queue size 128, less than required. 00:08:13.330 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:13.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:13.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:13.331 Initialization complete. Launching workers. 00:08:13.331 ======================================================== 00:08:13.331 Latency(us) 00:08:13.331 Device Information : IOPS MiB/s Average min max 00:08:13.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.42 0.09 883590.13 938.18 1015114.76 00:08:13.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 173.45 0.08 890072.23 842.71 1017223.65 00:08:13.331 ======================================================== 00:08:13.331 Total : 349.87 0.17 886803.64 842.71 1017223.65 00:08:13.331 00:08:13.331 [2024-11-18 18:15:11.429549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 (9): Bad file descriptor 00:08:13.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2847133 00:08:13.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2847133) - No such process 00:08:13.897 18:15:11 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2847133 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2847133 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2847133 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.897 [2024-11-18 18:15:11.949530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.897 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.898 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.898 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.898 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.898 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.898 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2847764 00:08:13.898 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:13.898 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:13.898 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2847764 00:08:13.898 18:15:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:13.898 [2024-11-18 18:15:12.063148] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:14.156 18:15:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:14.156 18:15:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2847764 00:08:14.156 18:15:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:14.720 18:15:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:14.720 18:15:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2847764 00:08:14.720 18:15:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:15.285 18:15:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.285 18:15:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2847764 00:08:15.285 18:15:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:15.855 18:15:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.855 18:15:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2847764 00:08:15.855 18:15:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.421 18:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:16.421 18:15:14 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2847764 00:08:16.421 18:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.679 18:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:16.679 18:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2847764 00:08:16.679 18:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.937 Initializing NVMe Controllers 00:08:16.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:16.937 Controller IO queue size 128, less than required. 00:08:16.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:16.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:16.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:16.937 Initialization complete. Launching workers. 
00:08:16.937 ======================================================== 00:08:16.937 Latency(us) 00:08:16.937 Device Information : IOPS MiB/s Average min max 00:08:16.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005109.43 1000212.48 1015130.13 00:08:16.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005720.60 1000264.61 1015904.97 00:08:16.937 ======================================================== 00:08:16.937 Total : 256.00 0.12 1005415.01 1000212.48 1015904.97 00:08:16.937 00:08:17.195 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:17.195 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2847764 00:08:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2847764) - No such process 00:08:17.195 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2847764 00:08:17.195 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:17.195 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:17.195 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:17.195 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:17.195 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:17.195 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:17.195 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:17.195 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:08:17.195 rmmod nvme_tcp 00:08:17.195 rmmod nvme_fabrics 00:08:17.195 rmmod nvme_keyring 00:08:17.453 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:17.453 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:17.453 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:17.453 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2846694 ']' 00:08:17.453 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2846694 00:08:17.453 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2846694 ']' 00:08:17.453 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2846694 00:08:17.453 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:17.453 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.453 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2846694 00:08:17.453 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.453 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.453 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2846694' 00:08:17.453 killing process with pid 2846694 00:08:17.453 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2846694 00:08:17.453 18:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
2846694 00:08:18.388 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:18.388 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:18.388 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:18.388 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:18.388 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:18.388 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:18.388 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:18.388 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:18.388 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:18.388 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.388 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.388 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.918 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:20.918 00:08:20.919 real 0m14.181s 00:08:20.919 user 0m30.993s 00:08:20.919 sys 0m3.200s 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.919 ************************************ 00:08:20.919 END TEST 
nvmf_delete_subsystem 00:08:20.919 ************************************ 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.919 ************************************ 00:08:20.919 START TEST nvmf_host_management 00:08:20.919 ************************************ 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:20.919 * Looking for test storage... 00:08:20.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.919 18:15:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:20.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.919 --rc genhtml_branch_coverage=1 00:08:20.919 --rc genhtml_function_coverage=1 00:08:20.919 --rc genhtml_legend=1 00:08:20.919 --rc 
geninfo_all_blocks=1 00:08:20.919 --rc geninfo_unexecuted_blocks=1 00:08:20.919 00:08:20.919 ' 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:20.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.919 --rc genhtml_branch_coverage=1 00:08:20.919 --rc genhtml_function_coverage=1 00:08:20.919 --rc genhtml_legend=1 00:08:20.919 --rc geninfo_all_blocks=1 00:08:20.919 --rc geninfo_unexecuted_blocks=1 00:08:20.919 00:08:20.919 ' 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:20.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.919 --rc genhtml_branch_coverage=1 00:08:20.919 --rc genhtml_function_coverage=1 00:08:20.919 --rc genhtml_legend=1 00:08:20.919 --rc geninfo_all_blocks=1 00:08:20.919 --rc geninfo_unexecuted_blocks=1 00:08:20.919 00:08:20.919 ' 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:20.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.919 --rc genhtml_branch_coverage=1 00:08:20.919 --rc genhtml_function_coverage=1 00:08:20.919 --rc genhtml_legend=1 00:08:20.919 --rc geninfo_all_blocks=1 00:08:20.919 --rc geninfo_unexecuted_blocks=1 00:08:20.919 00:08:20.919 ' 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:20.919 
18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.919 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:20.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:20.920 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:22.821 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:22.822 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:22.822 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.822 18:15:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:22.822 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:22.822 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:22.822 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:22.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:08:22.822 00:08:22.822 --- 10.0.0.2 ping statistics --- 00:08:22.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.822 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:22.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
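The `nvmf_tcp_init` steps traced above (create a namespace, move the target-side interface into it, assign the 10.0.0.x addresses, open TCP port 4420, then ping in both directions) can be summarized as a dry-run sketch. The real commands need root, so this version echoes each step instead of executing it; interface and address values are copied from the trace:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace plumbing from the nvmf_tcp_init trace.
# run() echoes instead of executing; replace its body with "$@" to run
# the steps for real (requires root and the actual interfaces).
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0 INI_IF=cvl_0_1
TGT_IP=10.0.0.2 INI_IP=10.0.0.1
run() { echo "+ $*"; }
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                      # target NIC into the netns
run ip addr add "$INI_IP/24" dev "$INI_IF"                 # initiator side, root ns
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"                                    # initiator -> target check
```

Isolating the target NIC in its own namespace is what lets the test run both target and initiator on one physical host over real hardware, which is why every target-side command in the log is wrapped in `ip netns exec cvl_0_0_ns_spdk`.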
00:08:22.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:08:22.822 00:08:22.822 --- 10.0.0.1 ping statistics --- 00:08:22.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.822 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:22.822 18:15:21 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2850247 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2850247 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2850247 ']' 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.822 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.081 [2024-11-18 18:15:21.182042] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:08:23.081 [2024-11-18 18:15:21.182190] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.081 [2024-11-18 18:15:21.341846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.339 [2024-11-18 18:15:21.486934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.339 [2024-11-18 18:15:21.487020] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.339 [2024-11-18 18:15:21.487045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.339 [2024-11-18 18:15:21.487069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.339 [2024-11-18 18:15:21.487088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:23.339 [2024-11-18 18:15:21.489979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.339 [2024-11-18 18:15:21.490081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.339 [2024-11-18 18:15:21.490127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.339 [2024-11-18 18:15:21.490134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:23.905 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.905 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:23.905 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:23.905 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:23.905 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.905 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.905 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:23.905 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.905 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.905 [2024-11-18 18:15:22.154845] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.905 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.905 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:23.905 18:15:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:23.905 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.905 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:23.905 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:23.905 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:23.905 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.905 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.163 Malloc0 00:08:24.163 [2024-11-18 18:15:22.279713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.163 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.163 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:24.163 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:24.163 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.163 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2850421 00:08:24.163 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2850421 /var/tmp/bdevperf.sock 00:08:24.163 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2850421 ']' 00:08:24.163 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:24.163 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:24.163 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:24.163 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.163 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:24.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:24.163 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:24.163 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.163 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:24.163 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.163 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:24.163 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:24.163 { 00:08:24.163 "params": { 00:08:24.163 "name": "Nvme$subsystem", 00:08:24.163 "trtype": "$TEST_TRANSPORT", 00:08:24.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.163 "adrfam": "ipv4", 00:08:24.163 "trsvcid": "$NVMF_PORT", 00:08:24.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.163 "hdgst": ${hdgst:-false}, 
00:08:24.163 "ddgst": ${ddgst:-false} 00:08:24.163 }, 00:08:24.163 "method": "bdev_nvme_attach_controller" 00:08:24.163 } 00:08:24.163 EOF 00:08:24.163 )") 00:08:24.163 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:24.164 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:24.164 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:24.164 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:24.164 "params": { 00:08:24.164 "name": "Nvme0", 00:08:24.164 "trtype": "tcp", 00:08:24.164 "traddr": "10.0.0.2", 00:08:24.164 "adrfam": "ipv4", 00:08:24.164 "trsvcid": "4420", 00:08:24.164 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:24.164 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:24.164 "hdgst": false, 00:08:24.164 "ddgst": false 00:08:24.164 }, 00:08:24.164 "method": "bdev_nvme_attach_controller" 00:08:24.164 }' 00:08:24.164 [2024-11-18 18:15:22.392041] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:24.164 [2024-11-18 18:15:22.392167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2850421 ] 00:08:24.422 [2024-11-18 18:15:22.539957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.422 [2024-11-18 18:15:22.668578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.987 Running I/O for 10 seconds... 
00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=195 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 195 -ge 100 ']' 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.247 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.247 [2024-11-18 18:15:23.428119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.247 [2024-11-18 18:15:23.428225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.247 [2024-11-18 18:15:23.428274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.247 [2024-11-18 18:15:23.428299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.247 [2024-11-18 18:15:23.428326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.247 [2024-11-18 18:15:23.428348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.247 [2024-11-18 18:15:23.428374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.247 [2024-11-18 18:15:23.428396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.247 [2024-11-18 18:15:23.428421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.247 [2024-11-18 18:15:23.428455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.247 [2024-11-18 18:15:23.428481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.247 [2024-11-18 18:15:23.428503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.247 [2024-11-18 18:15:23.428528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.247 [2024-11-18 18:15:23.428551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:25.247 [2024-11-18 18:15:23.428595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.247 [2024-11-18 18:15:23.428626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.247 [2024-11-18 18:15:23.428652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.247 [2024-11-18 18:15:23.428684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.247 [2024-11-18 18:15:23.428708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.247 [2024-11-18 18:15:23.428730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.247 [2024-11-18 18:15:23.428754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.247 [2024-11-18 18:15:23.428776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.247 [2024-11-18 18:15:23.428801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.247 [2024-11-18 18:15:23.428824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.247 [2024-11-18 18:15:23.428848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.247 [2024-11-18 
18:15:23.428869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.247 [2024-11-18 18:15:23.428894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.247 [2024-11-18 18:15:23.428924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.247 [2024-11-18 18:15:23.428950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.247 [2024-11-18 18:15:23.428971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.247 [2024-11-18 18:15:23.428996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.247 [2024-11-18 18:15:23.429018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.247 [2024-11-18 18:15:23.429042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.247 [2024-11-18 18:15:23.429065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.247 [2024-11-18 18:15:23.429095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.247 [2024-11-18 18:15:23.429118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.247 [2024-11-18 18:15:23.429143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.247 [2024-11-18 18:15:23.429164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.247 [2024-11-18 18:15:23.429189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.429212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.429236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.429258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.429282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.429305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.429330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.429352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.429377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.429398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.429422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.429444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.429468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.429490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.429515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.429536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.429562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.429584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.429615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.429639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.429674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.429702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.429727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.429749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.429773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.429795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.429819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.429841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.429866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.429887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.429922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.429944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 
[2024-11-18 18:15:23.429969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.429991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.430036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.430082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.430128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.430174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.430220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.430267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.430319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.430366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.430412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.430459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.430505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.430554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.430600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.430655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.430711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.430757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.430803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.430849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.430899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.430956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.430981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 18:15:23.431003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.431028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.248 [2024-11-18 
18:15:23.431049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.248 [2024-11-18 18:15:23.431074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.249 [2024-11-18 18:15:23.431096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.249 [2024-11-18 18:15:23.431121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.249 [2024-11-18 18:15:23.431143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.249 [2024-11-18 18:15:23.431168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.249 [2024-11-18 18:15:23.431189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.249 [2024-11-18 18:15:23.431215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.249 [2024-11-18 18:15:23.431237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.249 [2024-11-18 18:15:23.431261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.249 [2024-11-18 18:15:23.431283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.249 [2024-11-18 18:15:23.431310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.249 [2024-11-18 18:15:23.431333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.249 [2024-11-18 18:15:23.431396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:08:25.249 [2024-11-18 18:15:23.431809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:25.249 [2024-11-18 18:15:23.431841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.249 [2024-11-18 18:15:23.431866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:25.249 [2024-11-18 18:15:23.431887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.249 [2024-11-18 18:15:23.431919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:25.249 [2024-11-18 18:15:23.431945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.249 [2024-11-18 18:15:23.431969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:25.249 [2024-11-18 18:15:23.431991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.249 [2024-11-18 18:15:23.432011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:08:25.249 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.249 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:25.249 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.249 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.249 [2024-11-18 18:15:23.433241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:25.249 task offset: 32768 on job bdev=Nvme0n1 fails 00:08:25.249 00:08:25.249 Latency(us) 00:08:25.249 [2024-11-18T17:15:23.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.249 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:25.249 Job: Nvme0n1 ended in about 0.21 seconds with error 00:08:25.249 Verification LBA range: start 0x0 length 0x400 00:08:25.249 Nvme0n1 : 0.21 1212.30 75.77 303.08 0.00 40030.97 5849.69 40583.77 00:08:25.249 [2024-11-18T17:15:23.586Z] =================================================================================================================== 00:08:25.249 [2024-11-18T17:15:23.586Z] Total : 1212.30 75.77 303.08 0.00 40030.97 5849.69 40583.77 00:08:25.249 [2024-11-18 18:15:23.438184] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.249 [2024-11-18 18:15:23.438240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:08:25.249 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.249 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@87 -- # sleep 1 00:08:25.249 [2024-11-18 18:15:23.530819] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:26.183 18:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2850421 00:08:26.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2850421) - No such process 00:08:26.183 18:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:26.183 18:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:26.183 18:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:26.183 18:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:26.183 18:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:26.183 18:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:26.183 18:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:26.183 18:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:26.183 { 00:08:26.183 "params": { 00:08:26.183 "name": "Nvme$subsystem", 00:08:26.183 "trtype": "$TEST_TRANSPORT", 00:08:26.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:26.183 "adrfam": "ipv4", 00:08:26.183 "trsvcid": "$NVMF_PORT", 00:08:26.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:26.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:08:26.183 "hdgst": ${hdgst:-false}, 00:08:26.183 "ddgst": ${ddgst:-false} 00:08:26.183 }, 00:08:26.183 "method": "bdev_nvme_attach_controller" 00:08:26.183 } 00:08:26.183 EOF 00:08:26.183 )") 00:08:26.183 18:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:26.183 18:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:26.183 18:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:26.183 18:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:26.183 "params": { 00:08:26.183 "name": "Nvme0", 00:08:26.183 "trtype": "tcp", 00:08:26.183 "traddr": "10.0.0.2", 00:08:26.183 "adrfam": "ipv4", 00:08:26.183 "trsvcid": "4420", 00:08:26.183 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:26.183 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:26.183 "hdgst": false, 00:08:26.183 "ddgst": false 00:08:26.183 }, 00:08:26.183 "method": "bdev_nvme_attach_controller" 00:08:26.183 }' 00:08:26.441 [2024-11-18 18:15:24.527248] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:26.441 [2024-11-18 18:15:24.527381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2850698 ] 00:08:26.441 [2024-11-18 18:15:24.664100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.699 [2024-11-18 18:15:24.793411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.265 Running I/O for 1 seconds... 
00:08:28.200 1344.00 IOPS, 84.00 MiB/s 00:08:28.200 Latency(us) 00:08:28.200 [2024-11-18T17:15:26.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.200 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:28.200 Verification LBA range: start 0x0 length 0x400 00:08:28.200 Nvme0n1 : 1.01 1392.89 87.06 0.00 0.00 45155.18 8738.13 40389.59 00:08:28.200 [2024-11-18T17:15:26.537Z] =================================================================================================================== 00:08:28.200 [2024-11-18T17:15:26.537Z] Total : 1392.89 87.06 0.00 0.00 45155.18 8738.13 40389.59 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:29.135 18:15:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:29.135 rmmod nvme_tcp 00:08:29.135 rmmod nvme_fabrics 00:08:29.135 rmmod nvme_keyring 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2850247 ']' 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2850247 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2850247 ']' 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2850247 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2850247 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2850247' 00:08:29.135 killing process with pid 2850247 00:08:29.135 18:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2850247 00:08:29.135 18:15:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2850247 00:08:30.509 [2024-11-18 18:15:28.480390] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:30.509 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:30.509 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:30.509 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:30.509 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:30.509 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:30.509 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:30.509 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:30.509 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:30.509 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:30.509 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.509 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.509 18:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.409 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:32.409 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:32.409 00:08:32.409 real 0m11.797s 00:08:32.409 user 0m32.293s 
00:08:32.409 sys 0m3.108s 00:08:32.409 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.409 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.409 ************************************ 00:08:32.409 END TEST nvmf_host_management 00:08:32.409 ************************************ 00:08:32.410 18:15:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:32.410 18:15:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:32.410 18:15:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.410 18:15:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:32.410 ************************************ 00:08:32.410 START TEST nvmf_lvol 00:08:32.410 ************************************ 00:08:32.410 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:32.410 * Looking for test storage... 
00:08:32.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:32.410 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:32.410 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:32.410 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:32.670 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:32.670 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.670 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.670 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.670 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.670 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.670 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.670 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.670 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.670 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.670 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.670 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.670 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:32.670 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:32.670 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.670 18:15:30 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:32.670 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:32.670 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:32.670 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:32.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.671 --rc genhtml_branch_coverage=1 00:08:32.671 --rc genhtml_function_coverage=1 00:08:32.671 --rc genhtml_legend=1 00:08:32.671 --rc geninfo_all_blocks=1 00:08:32.671 --rc geninfo_unexecuted_blocks=1 
00:08:32.671 00:08:32.671 ' 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:32.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.671 --rc genhtml_branch_coverage=1 00:08:32.671 --rc genhtml_function_coverage=1 00:08:32.671 --rc genhtml_legend=1 00:08:32.671 --rc geninfo_all_blocks=1 00:08:32.671 --rc geninfo_unexecuted_blocks=1 00:08:32.671 00:08:32.671 ' 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:32.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.671 --rc genhtml_branch_coverage=1 00:08:32.671 --rc genhtml_function_coverage=1 00:08:32.671 --rc genhtml_legend=1 00:08:32.671 --rc geninfo_all_blocks=1 00:08:32.671 --rc geninfo_unexecuted_blocks=1 00:08:32.671 00:08:32.671 ' 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:32.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.671 --rc genhtml_branch_coverage=1 00:08:32.671 --rc genhtml_function_coverage=1 00:08:32.671 --rc genhtml_legend=1 00:08:32.671 --rc geninfo_all_blocks=1 00:08:32.671 --rc geninfo_unexecuted_blocks=1 00:08:32.671 00:08:32.671 ' 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.671 18:15:30 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:32.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:32.671 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.672 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.672 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.672 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:32.672 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:32.672 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:32.672 18:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:34.623 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:34.623 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.623 
18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:34.623 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:34.623 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:34.624 18:15:32 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:34.624 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:34.624 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.882 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.882 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.882 18:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:34.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:34.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:08:34.882 00:08:34.882 --- 10.0.0.2 ping statistics --- 00:08:34.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.882 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:34.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:08:34.882 00:08:34.882 --- 10.0.0.1 ping statistics --- 00:08:34.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.882 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2853175 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2853175 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2853175 ']' 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.882 18:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.882 [2024-11-18 18:15:33.129439] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:08:34.882 [2024-11-18 18:15:33.129582] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.140 [2024-11-18 18:15:33.275256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:35.140 [2024-11-18 18:15:33.412858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.140 [2024-11-18 18:15:33.412948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.140 [2024-11-18 18:15:33.412974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.140 [2024-11-18 18:15:33.412999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.140 [2024-11-18 18:15:33.413029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:35.140 [2024-11-18 18:15:33.415732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.140 [2024-11-18 18:15:33.415802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.140 [2024-11-18 18:15:33.415807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.072 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.072 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:36.072 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:36.072 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:36.072 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:36.072 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.072 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:36.330 [2024-11-18 18:15:34.431994] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.330 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:36.588 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:36.588 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:36.846 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:36.846 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:37.415 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:37.683 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f0665415-47ac-4318-9d67-46d8495835e6 00:08:37.683 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f0665415-47ac-4318-9d67-46d8495835e6 lvol 20 00:08:37.945 18:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=919fdfd0-ffa3-4760-bde5-46d49d6ebaca 00:08:37.946 18:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:38.218 18:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 919fdfd0-ffa3-4760-bde5-46d49d6ebaca 00:08:38.480 18:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:38.737 [2024-11-18 18:15:36.968783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.737 18:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:38.995 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2853739 00:08:38.995 18:15:37 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:38.995 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:40.367 18:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 919fdfd0-ffa3-4760-bde5-46d49d6ebaca MY_SNAPSHOT 00:08:40.367 18:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4397ea7d-4220-4b75-9784-d9c39c82347c 00:08:40.367 18:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 919fdfd0-ffa3-4760-bde5-46d49d6ebaca 30 00:08:40.932 18:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4397ea7d-4220-4b75-9784-d9c39c82347c MY_CLONE 00:08:41.190 18:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=626a01a0-c45e-4e9f-acb6-0870bc6387b0 00:08:41.190 18:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 626a01a0-c45e-4e9f-acb6-0870bc6387b0 00:08:42.123 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2853739 00:08:50.232 Initializing NVMe Controllers 00:08:50.233 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:50.233 Controller IO queue size 128, less than required. 00:08:50.233 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:50.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:08:50.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:08:50.233 Initialization complete. Launching workers.
00:08:50.233 ========================================================
00:08:50.233 Latency(us)
00:08:50.233 Device Information : IOPS MiB/s Average min max
00:08:50.233 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8270.78 32.31 15491.96 316.24 142948.39
00:08:50.233 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8111.09 31.68 15780.29 3479.09 161680.05
00:08:50.233 ========================================================
00:08:50.233 Total : 16381.87 63.99 15634.72 316.24 161680.05
00:08:50.233
00:08:50.233 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:08:50.233 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 919fdfd0-ffa3-4760-bde5-46d49d6ebaca
00:08:50.233 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f0665415-47ac-4318-9d67-46d8495835e6
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:50.491 rmmod nvme_tcp
00:08:50.491 rmmod nvme_fabrics
00:08:50.491 rmmod nvme_keyring
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2853175 ']'
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2853175
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2853175 ']'
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2853175
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2853175
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2853175'
00:08:50.491 killing process with pid 2853175
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2853175
00:08:50.491 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2853175
00:08:51.863 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:51.863 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:51.863 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:51.863 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:08:51.863 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:08:51.863 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:51.863 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
00:08:51.863 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:51.863 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:51.863 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:51.863 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:51.863 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:54.393
00:08:54.393 real 0m21.505s
00:08:54.393 user 1m12.240s
00:08:54.393 sys 0m5.388s
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:08:54.393 ************************************
00:08:54.393 END TEST nvmf_lvol
00:08:54.393 ************************************
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:54.393 ************************************
00:08:54.393 START TEST nvmf_lvs_grow
00:08:54.393 ************************************
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:08:54.393 * Looking for test storage...
00:08:54.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-:
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-:
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<'
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:54.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:54.393 --rc genhtml_branch_coverage=1
00:08:54.393 --rc genhtml_function_coverage=1
00:08:54.393 --rc genhtml_legend=1
00:08:54.393 --rc geninfo_all_blocks=1
00:08:54.393 --rc geninfo_unexecuted_blocks=1
00:08:54.393
00:08:54.393 '
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:54.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:54.393 --rc genhtml_branch_coverage=1
00:08:54.393 --rc genhtml_function_coverage=1
00:08:54.393 --rc genhtml_legend=1
00:08:54.393 --rc geninfo_all_blocks=1
00:08:54.393 --rc geninfo_unexecuted_blocks=1
00:08:54.393
00:08:54.393 '
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:08:54.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:54.393 --rc genhtml_branch_coverage=1
00:08:54.393 --rc genhtml_function_coverage=1
00:08:54.393 --rc genhtml_legend=1
00:08:54.393 --rc geninfo_all_blocks=1
00:08:54.393 --rc geninfo_unexecuted_blocks=1
00:08:54.393
00:08:54.393 '
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:08:54.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:54.393 --rc genhtml_branch_coverage=1
00:08:54.393 --rc genhtml_function_coverage=1
00:08:54.393 --rc genhtml_legend=1
00:08:54.393 --rc geninfo_all_blocks=1
00:08:54.393 --rc geninfo_unexecuted_blocks=1
00:08:54.393
00:08:54.393 '
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:54.393 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:54.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable
00:08:54.394 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=()
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=()
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=()
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=()
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=()
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=()
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=()
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:56.296 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:08:56.297 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:08:56.297 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:08:56.297 Found net devices under 0000:0a:00.0: cvl_0_0
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:08:56.297 Found net devices under 0000:0a:00.1: cvl_0_1
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:56.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:56.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms
00:08:56.297
00:08:56.297 --- 10.0.0.2 ping statistics ---
00:08:56.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:56.297 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:56.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:56.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms
00:08:56.297
00:08:56.297 --- 10.0.0.1 ping statistics ---
00:08:56.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:56.297 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2857153
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2857153
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2857153 ']'
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:56.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:56.297 18:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:08:56.556 [2024-11-18 18:15:54.652773] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:08:56.556 [2024-11-18 18:15:54.652900] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:56.556 [2024-11-18 18:15:54.800495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:56.814 [2024-11-18 18:15:54.936249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:56.814 [2024-11-18 18:15:54.936334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:56.814 [2024-11-18 18:15:54.936360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:56.814 [2024-11-18 18:15:54.936390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:56.814 [2024-11-18 18:15:54.936410] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:56.814 [2024-11-18 18:15:54.938063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.380 18:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.380 18:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:57.380 18:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:57.380 18:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:57.380 18:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:57.380 18:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.380 18:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:57.946 [2024-11-18 18:15:55.980446] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.946 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:57.946 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.946 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.946 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:57.946 ************************************ 00:08:57.946 START TEST lvs_grow_clean 00:08:57.946 ************************************ 00:08:57.946 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:57.946 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:57.946 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:57.946 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:57.946 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:57.946 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:57.946 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:57.946 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:57.946 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:57.946 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:58.204 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:58.204 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:58.462 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2c6fa1b7-5d34-41cd-bca7-21b49d053e81 00:08:58.462 18:15:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c6fa1b7-5d34-41cd-bca7-21b49d053e81 00:08:58.462 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:58.720 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:58.721 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:58.721 18:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2c6fa1b7-5d34-41cd-bca7-21b49d053e81 lvol 150 00:08:58.979 18:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=80082bab-7558-4d7f-b129-f99d02011412 00:08:58.979 18:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:58.979 18:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:59.237 [2024-11-18 18:15:57.491716] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:59.237 [2024-11-18 18:15:57.491841] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:59.237 true 00:08:59.237 18:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c6fa1b7-5d34-41cd-bca7-21b49d053e81 00:08:59.237 18:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:59.494 18:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:59.494 18:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:00.059 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 80082bab-7558-4d7f-b129-f99d02011412 00:09:00.318 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:00.576 [2024-11-18 18:15:58.687566] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.576 18:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:00.834 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2857726 00:09:00.834 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:00.834 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 
2857726 /var/tmp/bdevperf.sock 00:09:00.834 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2857726 ']' 00:09:00.834 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:00.834 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.834 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:00.834 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:00.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:00.834 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.834 18:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:00.834 [2024-11-18 18:15:59.087746] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:00.834 [2024-11-18 18:15:59.087911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2857726 ] 00:09:01.105 [2024-11-18 18:15:59.232431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.105 [2024-11-18 18:15:59.367572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.069 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.069 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:02.069 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:02.327 Nvme0n1 00:09:02.327 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:02.584 [ 00:09:02.584 { 00:09:02.584 "name": "Nvme0n1", 00:09:02.584 "aliases": [ 00:09:02.584 "80082bab-7558-4d7f-b129-f99d02011412" 00:09:02.584 ], 00:09:02.584 "product_name": "NVMe disk", 00:09:02.584 "block_size": 4096, 00:09:02.584 "num_blocks": 38912, 00:09:02.584 "uuid": "80082bab-7558-4d7f-b129-f99d02011412", 00:09:02.584 "numa_id": 0, 00:09:02.584 "assigned_rate_limits": { 00:09:02.584 "rw_ios_per_sec": 0, 00:09:02.584 "rw_mbytes_per_sec": 0, 00:09:02.584 "r_mbytes_per_sec": 0, 00:09:02.584 "w_mbytes_per_sec": 0 00:09:02.584 }, 00:09:02.584 "claimed": false, 00:09:02.584 "zoned": false, 00:09:02.584 "supported_io_types": { 00:09:02.584 "read": true, 
00:09:02.584 "write": true, 00:09:02.584 "unmap": true, 00:09:02.584 "flush": true, 00:09:02.584 "reset": true, 00:09:02.584 "nvme_admin": true, 00:09:02.584 "nvme_io": true, 00:09:02.584 "nvme_io_md": false, 00:09:02.584 "write_zeroes": true, 00:09:02.584 "zcopy": false, 00:09:02.584 "get_zone_info": false, 00:09:02.584 "zone_management": false, 00:09:02.584 "zone_append": false, 00:09:02.584 "compare": true, 00:09:02.584 "compare_and_write": true, 00:09:02.584 "abort": true, 00:09:02.584 "seek_hole": false, 00:09:02.584 "seek_data": false, 00:09:02.584 "copy": true, 00:09:02.584 "nvme_iov_md": false 00:09:02.584 }, 00:09:02.584 "memory_domains": [ 00:09:02.584 { 00:09:02.584 "dma_device_id": "system", 00:09:02.584 "dma_device_type": 1 00:09:02.584 } 00:09:02.584 ], 00:09:02.584 "driver_specific": { 00:09:02.584 "nvme": [ 00:09:02.584 { 00:09:02.584 "trid": { 00:09:02.584 "trtype": "TCP", 00:09:02.584 "adrfam": "IPv4", 00:09:02.584 "traddr": "10.0.0.2", 00:09:02.584 "trsvcid": "4420", 00:09:02.584 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:02.584 }, 00:09:02.584 "ctrlr_data": { 00:09:02.584 "cntlid": 1, 00:09:02.584 "vendor_id": "0x8086", 00:09:02.584 "model_number": "SPDK bdev Controller", 00:09:02.584 "serial_number": "SPDK0", 00:09:02.584 "firmware_revision": "25.01", 00:09:02.584 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:02.584 "oacs": { 00:09:02.584 "security": 0, 00:09:02.584 "format": 0, 00:09:02.584 "firmware": 0, 00:09:02.584 "ns_manage": 0 00:09:02.584 }, 00:09:02.584 "multi_ctrlr": true, 00:09:02.584 "ana_reporting": false 00:09:02.584 }, 00:09:02.584 "vs": { 00:09:02.584 "nvme_version": "1.3" 00:09:02.584 }, 00:09:02.584 "ns_data": { 00:09:02.584 "id": 1, 00:09:02.584 "can_share": true 00:09:02.584 } 00:09:02.584 } 00:09:02.584 ], 00:09:02.584 "mp_policy": "active_passive" 00:09:02.584 } 00:09:02.584 } 00:09:02.584 ] 00:09:02.584 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2857945 00:09:02.584 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:02.584 18:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:02.841 Running I/O for 10 seconds... 00:09:03.773 Latency(us) 00:09:03.773 [2024-11-18T17:16:02.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.773 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.773 Nvme0n1 : 1.00 10703.00 41.81 0.00 0.00 0.00 0.00 0.00 00:09:03.773 [2024-11-18T17:16:02.110Z] =================================================================================================================== 00:09:03.773 [2024-11-18T17:16:02.110Z] Total : 10703.00 41.81 0.00 0.00 0.00 0.00 0.00 00:09:03.773 00:09:04.706 18:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2c6fa1b7-5d34-41cd-bca7-21b49d053e81 00:09:04.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.706 Nvme0n1 : 2.00 10876.00 42.48 0.00 0.00 0.00 0.00 0.00 00:09:04.707 [2024-11-18T17:16:03.044Z] =================================================================================================================== 00:09:04.707 [2024-11-18T17:16:03.044Z] Total : 10876.00 42.48 0.00 0.00 0.00 0.00 0.00 00:09:04.707 00:09:05.058 true 00:09:05.058 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c6fa1b7-5d34-41cd-bca7-21b49d053e81 00:09:05.058 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:05.338 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:05.338 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:05.338 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2857945 00:09:05.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.905 Nvme0n1 : 3.00 10849.00 42.38 0.00 0.00 0.00 0.00 0.00 00:09:05.905 [2024-11-18T17:16:04.242Z] =================================================================================================================== 00:09:05.905 [2024-11-18T17:16:04.242Z] Total : 10849.00 42.38 0.00 0.00 0.00 0.00 0.00 00:09:05.905 00:09:06.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.838 Nvme0n1 : 4.00 10915.25 42.64 0.00 0.00 0.00 0.00 0.00 00:09:06.838 [2024-11-18T17:16:05.175Z] =================================================================================================================== 00:09:06.838 [2024-11-18T17:16:05.175Z] Total : 10915.25 42.64 0.00 0.00 0.00 0.00 0.00 00:09:06.838 00:09:07.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.772 Nvme0n1 : 5.00 10942.00 42.74 0.00 0.00 0.00 0.00 0.00 00:09:07.772 [2024-11-18T17:16:06.109Z] =================================================================================================================== 00:09:07.772 [2024-11-18T17:16:06.109Z] Total : 10942.00 42.74 0.00 0.00 0.00 0.00 0.00 00:09:07.772 00:09:08.707 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.707 Nvme0n1 : 6.00 10981.00 42.89 0.00 0.00 0.00 0.00 0.00 00:09:08.707 [2024-11-18T17:16:07.044Z] =================================================================================================================== 00:09:08.707 
[2024-11-18T17:16:07.044Z] Total : 10981.00 42.89 0.00 0.00 0.00 0.00 0.00 00:09:08.707 00:09:09.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.642 Nvme0n1 : 7.00 11008.86 43.00 0.00 0.00 0.00 0.00 0.00 00:09:09.642 [2024-11-18T17:16:07.979Z] =================================================================================================================== 00:09:09.642 [2024-11-18T17:16:07.979Z] Total : 11008.86 43.00 0.00 0.00 0.00 0.00 0.00 00:09:09.642 00:09:11.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.016 Nvme0n1 : 8.00 11030.00 43.09 0.00 0.00 0.00 0.00 0.00 00:09:11.016 [2024-11-18T17:16:09.353Z] =================================================================================================================== 00:09:11.016 [2024-11-18T17:16:09.353Z] Total : 11030.00 43.09 0.00 0.00 0.00 0.00 0.00 00:09:11.016 00:09:11.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.950 Nvme0n1 : 9.00 11039.11 43.12 0.00 0.00 0.00 0.00 0.00 00:09:11.950 [2024-11-18T17:16:10.287Z] =================================================================================================================== 00:09:11.950 [2024-11-18T17:16:10.287Z] Total : 11039.11 43.12 0.00 0.00 0.00 0.00 0.00 00:09:11.950 00:09:12.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.883 Nvme0n1 : 10.00 11052.80 43.17 0.00 0.00 0.00 0.00 0.00 00:09:12.883 [2024-11-18T17:16:11.220Z] =================================================================================================================== 00:09:12.883 [2024-11-18T17:16:11.220Z] Total : 11052.80 43.17 0.00 0.00 0.00 0.00 0.00 00:09:12.883 00:09:12.883 00:09:12.883 Latency(us) 00:09:12.883 [2024-11-18T17:16:11.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:12.883 Nvme0n1 : 10.01 11054.46 43.18 0.00 0.00 11572.17 5801.15 22524.97 00:09:12.883 [2024-11-18T17:16:11.220Z] =================================================================================================================== 00:09:12.883 [2024-11-18T17:16:11.221Z] Total : 11054.46 43.18 0.00 0.00 11572.17 5801.15 22524.97 00:09:12.884 { 00:09:12.884 "results": [ 00:09:12.884 { 00:09:12.884 "job": "Nvme0n1", 00:09:12.884 "core_mask": "0x2", 00:09:12.884 "workload": "randwrite", 00:09:12.884 "status": "finished", 00:09:12.884 "queue_depth": 128, 00:09:12.884 "io_size": 4096, 00:09:12.884 "runtime": 10.01008, 00:09:12.884 "iops": 11054.457107235907, 00:09:12.884 "mibps": 43.18147307514026, 00:09:12.884 "io_failed": 0, 00:09:12.884 "io_timeout": 0, 00:09:12.884 "avg_latency_us": 11572.170960735171, 00:09:12.884 "min_latency_us": 5801.14962962963, 00:09:12.884 "max_latency_us": 22524.965925925924 00:09:12.884 } 00:09:12.884 ], 00:09:12.884 "core_count": 1 00:09:12.884 } 00:09:12.884 18:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2857726 00:09:12.884 18:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2857726 ']' 00:09:12.884 18:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2857726 00:09:12.884 18:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:12.884 18:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.884 18:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2857726 00:09:12.884 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:12.884 18:16:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:12.884 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2857726' 00:09:12.884 killing process with pid 2857726 00:09:12.884 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2857726 00:09:12.884 Received shutdown signal, test time was about 10.000000 seconds 00:09:12.884 00:09:12.884 Latency(us) 00:09:12.884 [2024-11-18T17:16:11.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.884 [2024-11-18T17:16:11.221Z] =================================================================================================================== 00:09:12.884 [2024-11-18T17:16:11.221Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:12.884 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2857726 00:09:13.818 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:14.076 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:14.333 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c6fa1b7-5d34-41cd-bca7-21b49d053e81 00:09:14.333 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:14.591 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:09:14.591 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:14.591 18:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:14.850 [2024-11-18 18:16:13.063952] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:14.850 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c6fa1b7-5d34-41cd-bca7-21b49d053e81 00:09:14.850 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:14.850 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c6fa1b7-5d34-41cd-bca7-21b49d053e81 00:09:14.850 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.850 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.850 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.850 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.850 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.850 
18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.850 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.850 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:14.850 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c6fa1b7-5d34-41cd-bca7-21b49d053e81 00:09:15.108 request: 00:09:15.108 { 00:09:15.108 "uuid": "2c6fa1b7-5d34-41cd-bca7-21b49d053e81", 00:09:15.108 "method": "bdev_lvol_get_lvstores", 00:09:15.108 "req_id": 1 00:09:15.108 } 00:09:15.108 Got JSON-RPC error response 00:09:15.108 response: 00:09:15.108 { 00:09:15.108 "code": -19, 00:09:15.108 "message": "No such device" 00:09:15.108 } 00:09:15.108 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:15.108 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:15.108 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:15.108 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:15.108 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:15.365 aio_bdev 00:09:15.365 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 80082bab-7558-4d7f-b129-f99d02011412 00:09:15.365 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=80082bab-7558-4d7f-b129-f99d02011412 00:09:15.365 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.365 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:15.365 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.365 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.365 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:15.623 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 80082bab-7558-4d7f-b129-f99d02011412 -t 2000 00:09:16.189 [ 00:09:16.189 { 00:09:16.189 "name": "80082bab-7558-4d7f-b129-f99d02011412", 00:09:16.189 "aliases": [ 00:09:16.189 "lvs/lvol" 00:09:16.189 ], 00:09:16.189 "product_name": "Logical Volume", 00:09:16.189 "block_size": 4096, 00:09:16.189 "num_blocks": 38912, 00:09:16.189 "uuid": "80082bab-7558-4d7f-b129-f99d02011412", 00:09:16.189 "assigned_rate_limits": { 00:09:16.189 "rw_ios_per_sec": 0, 00:09:16.189 "rw_mbytes_per_sec": 0, 00:09:16.189 "r_mbytes_per_sec": 0, 00:09:16.189 "w_mbytes_per_sec": 0 00:09:16.189 }, 00:09:16.189 "claimed": false, 00:09:16.189 "zoned": false, 00:09:16.189 "supported_io_types": { 00:09:16.189 "read": true, 00:09:16.189 "write": true, 00:09:16.189 "unmap": true, 00:09:16.189 "flush": false, 00:09:16.189 "reset": true, 00:09:16.189 
"nvme_admin": false, 00:09:16.189 "nvme_io": false, 00:09:16.189 "nvme_io_md": false, 00:09:16.189 "write_zeroes": true, 00:09:16.189 "zcopy": false, 00:09:16.189 "get_zone_info": false, 00:09:16.189 "zone_management": false, 00:09:16.189 "zone_append": false, 00:09:16.189 "compare": false, 00:09:16.189 "compare_and_write": false, 00:09:16.189 "abort": false, 00:09:16.189 "seek_hole": true, 00:09:16.189 "seek_data": true, 00:09:16.189 "copy": false, 00:09:16.189 "nvme_iov_md": false 00:09:16.189 }, 00:09:16.189 "driver_specific": { 00:09:16.189 "lvol": { 00:09:16.189 "lvol_store_uuid": "2c6fa1b7-5d34-41cd-bca7-21b49d053e81", 00:09:16.189 "base_bdev": "aio_bdev", 00:09:16.189 "thin_provision": false, 00:09:16.189 "num_allocated_clusters": 38, 00:09:16.189 "snapshot": false, 00:09:16.189 "clone": false, 00:09:16.189 "esnap_clone": false 00:09:16.189 } 00:09:16.189 } 00:09:16.189 } 00:09:16.189 ] 00:09:16.189 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:16.189 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c6fa1b7-5d34-41cd-bca7-21b49d053e81 00:09:16.189 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:16.447 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:16.447 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c6fa1b7-5d34-41cd-bca7-21b49d053e81 00:09:16.447 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:16.705 18:16:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:16.705 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 80082bab-7558-4d7f-b129-f99d02011412 00:09:16.964 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2c6fa1b7-5d34-41cd-bca7-21b49d053e81 00:09:17.222 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:17.480 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:17.480 00:09:17.480 real 0m19.678s 00:09:17.480 user 0m19.484s 00:09:17.480 sys 0m1.951s 00:09:17.480 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.480 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:17.480 ************************************ 00:09:17.480 END TEST lvs_grow_clean 00:09:17.480 ************************************ 00:09:17.480 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:17.480 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:17.480 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.480 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:17.480 ************************************ 
00:09:17.480 START TEST lvs_grow_dirty 00:09:17.480 ************************************ 00:09:17.480 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:17.480 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:17.480 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:17.480 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:17.480 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:17.480 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:17.480 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:17.480 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:17.480 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:17.480 18:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:18.045 18:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:18.046 18:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:18.303 18:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=20a2bf19-c4cf-4bb7-9e73-9221c5b2b746 00:09:18.303 18:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20a2bf19-c4cf-4bb7-9e73-9221c5b2b746 00:09:18.303 18:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:18.560 18:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:18.560 18:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:18.561 18:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 20a2bf19-c4cf-4bb7-9e73-9221c5b2b746 lvol 150 00:09:18.818 18:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4372afb0-3ecd-4b95-88a8-218081fca5d4 00:09:18.818 18:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:18.818 18:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:19.076 [2024-11-18 18:16:17.192461] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:19.076 [2024-11-18 18:16:17.192572] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:19.076 true 00:09:19.076 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20a2bf19-c4cf-4bb7-9e73-9221c5b2b746 00:09:19.076 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:19.333 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:19.333 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:19.590 18:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4372afb0-3ecd-4b95-88a8-218081fca5d4 00:09:19.848 18:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:20.107 [2024-11-18 18:16:18.272061] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.107 18:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:20.365 18:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2860051 00:09:20.365 18:16:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:20.365 18:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:20.365 18:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2860051 /var/tmp/bdevperf.sock 00:09:20.365 18:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2860051 ']' 00:09:20.365 18:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:20.365 18:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.365 18:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:20.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:20.365 18:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.365 18:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:20.365 [2024-11-18 18:16:18.638904] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:20.365 [2024-11-18 18:16:18.639055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2860051 ] 00:09:20.623 [2024-11-18 18:16:18.785778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.623 [2024-11-18 18:16:18.922765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.557 18:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.557 18:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:21.557 18:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:22.124 Nvme0n1 00:09:22.124 18:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:22.124 [ 00:09:22.124 { 00:09:22.124 "name": "Nvme0n1", 00:09:22.124 "aliases": [ 00:09:22.124 "4372afb0-3ecd-4b95-88a8-218081fca5d4" 00:09:22.124 ], 00:09:22.124 "product_name": "NVMe disk", 00:09:22.124 "block_size": 4096, 00:09:22.124 "num_blocks": 38912, 00:09:22.124 "uuid": "4372afb0-3ecd-4b95-88a8-218081fca5d4", 00:09:22.124 "numa_id": 0, 00:09:22.124 "assigned_rate_limits": { 00:09:22.124 "rw_ios_per_sec": 0, 00:09:22.124 "rw_mbytes_per_sec": 0, 00:09:22.124 "r_mbytes_per_sec": 0, 00:09:22.124 "w_mbytes_per_sec": 0 00:09:22.124 }, 00:09:22.124 "claimed": false, 00:09:22.124 "zoned": false, 00:09:22.124 "supported_io_types": { 00:09:22.124 "read": true, 
00:09:22.124 "write": true, 00:09:22.124 "unmap": true, 00:09:22.124 "flush": true, 00:09:22.124 "reset": true, 00:09:22.124 "nvme_admin": true, 00:09:22.124 "nvme_io": true, 00:09:22.124 "nvme_io_md": false, 00:09:22.124 "write_zeroes": true, 00:09:22.124 "zcopy": false, 00:09:22.124 "get_zone_info": false, 00:09:22.124 "zone_management": false, 00:09:22.124 "zone_append": false, 00:09:22.124 "compare": true, 00:09:22.124 "compare_and_write": true, 00:09:22.124 "abort": true, 00:09:22.124 "seek_hole": false, 00:09:22.124 "seek_data": false, 00:09:22.124 "copy": true, 00:09:22.124 "nvme_iov_md": false 00:09:22.124 }, 00:09:22.124 "memory_domains": [ 00:09:22.124 { 00:09:22.124 "dma_device_id": "system", 00:09:22.124 "dma_device_type": 1 00:09:22.124 } 00:09:22.124 ], 00:09:22.124 "driver_specific": { 00:09:22.124 "nvme": [ 00:09:22.124 { 00:09:22.124 "trid": { 00:09:22.124 "trtype": "TCP", 00:09:22.124 "adrfam": "IPv4", 00:09:22.124 "traddr": "10.0.0.2", 00:09:22.124 "trsvcid": "4420", 00:09:22.124 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:22.124 }, 00:09:22.124 "ctrlr_data": { 00:09:22.124 "cntlid": 1, 00:09:22.124 "vendor_id": "0x8086", 00:09:22.124 "model_number": "SPDK bdev Controller", 00:09:22.124 "serial_number": "SPDK0", 00:09:22.124 "firmware_revision": "25.01", 00:09:22.124 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:22.124 "oacs": { 00:09:22.124 "security": 0, 00:09:22.124 "format": 0, 00:09:22.124 "firmware": 0, 00:09:22.124 "ns_manage": 0 00:09:22.124 }, 00:09:22.124 "multi_ctrlr": true, 00:09:22.124 "ana_reporting": false 00:09:22.124 }, 00:09:22.124 "vs": { 00:09:22.124 "nvme_version": "1.3" 00:09:22.124 }, 00:09:22.124 "ns_data": { 00:09:22.124 "id": 1, 00:09:22.124 "can_share": true 00:09:22.124 } 00:09:22.124 } 00:09:22.124 ], 00:09:22.124 "mp_policy": "active_passive" 00:09:22.124 } 00:09:22.124 } 00:09:22.124 ] 00:09:22.124 18:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2860321 00:09:22.124 18:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:22.124 18:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:22.382 Running I/O for 10 seconds... 00:09:23.315 Latency(us) 00:09:23.315 [2024-11-18T17:16:21.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.315 Nvme0n1 : 1.00 10734.00 41.93 0.00 0.00 0.00 0.00 0.00 00:09:23.315 [2024-11-18T17:16:21.652Z] =================================================================================================================== 00:09:23.315 [2024-11-18T17:16:21.652Z] Total : 10734.00 41.93 0.00 0.00 0.00 0.00 0.00 00:09:23.315 00:09:24.249 18:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 20a2bf19-c4cf-4bb7-9e73-9221c5b2b746 00:09:24.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.250 Nvme0n1 : 2.00 10764.50 42.05 0.00 0.00 0.00 0.00 0.00 00:09:24.250 [2024-11-18T17:16:22.587Z] =================================================================================================================== 00:09:24.250 [2024-11-18T17:16:22.587Z] Total : 10764.50 42.05 0.00 0.00 0.00 0.00 0.00 00:09:24.250 00:09:24.507 true 00:09:24.507 18:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20a2bf19-c4cf-4bb7-9e73-9221c5b2b746 00:09:24.507 18:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:24.766 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:24.766 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:24.766 18:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2860321 00:09:25.332 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.332 Nvme0n1 : 3.00 10859.33 42.42 0.00 0.00 0.00 0.00 0.00 00:09:25.332 [2024-11-18T17:16:23.669Z] =================================================================================================================== 00:09:25.332 [2024-11-18T17:16:23.669Z] Total : 10859.33 42.42 0.00 0.00 0.00 0.00 0.00 00:09:25.332 00:09:26.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.266 Nvme0n1 : 4.00 10938.50 42.73 0.00 0.00 0.00 0.00 0.00 00:09:26.266 [2024-11-18T17:16:24.603Z] =================================================================================================================== 00:09:26.266 [2024-11-18T17:16:24.603Z] Total : 10938.50 42.73 0.00 0.00 0.00 0.00 0.00 00:09:26.266 00:09:27.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.641 Nvme0n1 : 5.00 10942.00 42.74 0.00 0.00 0.00 0.00 0.00 00:09:27.641 [2024-11-18T17:16:25.978Z] =================================================================================================================== 00:09:27.641 [2024-11-18T17:16:25.978Z] Total : 10942.00 42.74 0.00 0.00 0.00 0.00 0.00 00:09:27.641 00:09:28.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.575 Nvme0n1 : 6.00 10983.83 42.91 0.00 0.00 0.00 0.00 0.00 00:09:28.575 [2024-11-18T17:16:26.912Z] =================================================================================================================== 00:09:28.575 
[2024-11-18T17:16:26.912Z] Total : 10983.83 42.91 0.00 0.00 0.00 0.00 0.00 00:09:28.575 00:09:29.509 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.509 Nvme0n1 : 7.00 11011.29 43.01 0.00 0.00 0.00 0.00 0.00 00:09:29.509 [2024-11-18T17:16:27.846Z] =================================================================================================================== 00:09:29.509 [2024-11-18T17:16:27.846Z] Total : 11011.29 43.01 0.00 0.00 0.00 0.00 0.00 00:09:29.509 00:09:30.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.443 Nvme0n1 : 8.00 11047.75 43.16 0.00 0.00 0.00 0.00 0.00 00:09:30.443 [2024-11-18T17:16:28.780Z] =================================================================================================================== 00:09:30.443 [2024-11-18T17:16:28.780Z] Total : 11047.75 43.16 0.00 0.00 0.00 0.00 0.00 00:09:30.443 00:09:31.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.378 Nvme0n1 : 9.00 11062.22 43.21 0.00 0.00 0.00 0.00 0.00 00:09:31.378 [2024-11-18T17:16:29.715Z] =================================================================================================================== 00:09:31.378 [2024-11-18T17:16:29.715Z] Total : 11062.22 43.21 0.00 0.00 0.00 0.00 0.00 00:09:31.378 00:09:32.367 00:09:32.367 Latency(us) 00:09:32.367 [2024-11-18T17:16:30.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.367 Nvme0n1 : 10.00 11069.36 43.24 0.00 0.00 11556.72 2718.53 22136.60 00:09:32.367 [2024-11-18T17:16:30.704Z] =================================================================================================================== 00:09:32.367 [2024-11-18T17:16:30.704Z] Total : 11069.36 43.24 0.00 0.00 11556.72 2718.53 22136.60 00:09:32.367 { 00:09:32.367 "results": [ 00:09:32.367 { 00:09:32.367 "job": "Nvme0n1", 
00:09:32.367 "core_mask": "0x2", 00:09:32.367 "workload": "randwrite", 00:09:32.367 "status": "finished", 00:09:32.367 "queue_depth": 128, 00:09:32.367 "io_size": 4096, 00:09:32.367 "runtime": 10.004099, 00:09:32.367 "iops": 11069.362668242287, 00:09:32.367 "mibps": 43.239697922821435, 00:09:32.367 "io_failed": 0, 00:09:32.367 "io_timeout": 0, 00:09:32.367 "avg_latency_us": 11556.715460129306, 00:09:32.367 "min_latency_us": 2718.5303703703703, 00:09:32.367 "max_latency_us": 22136.604444444445 00:09:32.367 } 00:09:32.367 ], 00:09:32.367 "core_count": 1 00:09:32.367 } 00:09:32.367 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2860051 00:09:32.367 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2860051 ']' 00:09:32.367 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2860051 00:09:32.367 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:32.367 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.367 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2860051 00:09:32.367 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:32.367 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:32.367 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2860051' 00:09:32.367 killing process with pid 2860051 00:09:32.367 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2860051 00:09:32.367 
Received shutdown signal, test time was about 10.000000 seconds 00:09:32.367 00:09:32.367 Latency(us) 00:09:32.367 [2024-11-18T17:16:30.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.367 [2024-11-18T17:16:30.704Z] =================================================================================================================== 00:09:32.367 [2024-11-18T17:16:30.704Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:32.367 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2860051 00:09:33.301 18:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:33.558 18:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:33.816 18:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20a2bf19-c4cf-4bb7-9e73-9221c5b2b746 00:09:33.816 18:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:34.074 18:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:34.074 18:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:34.074 18:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2857153 00:09:34.074 18:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2857153 00:09:34.332 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2857153 Killed "${NVMF_APP[@]}" "$@" 00:09:34.332 18:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:34.332 18:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:34.332 18:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:34.332 18:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:34.332 18:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:34.332 18:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2861669 00:09:34.332 18:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:34.332 18:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2861669 00:09:34.332 18:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2861669 ']' 00:09:34.332 18:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.332 18:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.332 18:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:34.332 18:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.332 18:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:34.332 [2024-11-18 18:16:32.519666] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:34.332 [2024-11-18 18:16:32.519826] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.591 [2024-11-18 18:16:32.682747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.591 [2024-11-18 18:16:32.820788] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.591 [2024-11-18 18:16:32.820882] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.591 [2024-11-18 18:16:32.820907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.591 [2024-11-18 18:16:32.820932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.591 [2024-11-18 18:16:32.820952] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:34.591 [2024-11-18 18:16:32.822599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.525 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.525 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:35.525 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:35.525 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:35.525 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:35.525 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.526 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:35.526 [2024-11-18 18:16:33.816298] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:35.526 [2024-11-18 18:16:33.816529] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:35.526 [2024-11-18 18:16:33.816621] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:35.526 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:35.526 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4372afb0-3ecd-4b95-88a8-218081fca5d4 00:09:35.526 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=4372afb0-3ecd-4b95-88a8-218081fca5d4 
00:09:35.526 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.526 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:35.526 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.526 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.526 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:36.092 18:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4372afb0-3ecd-4b95-88a8-218081fca5d4 -t 2000 00:09:36.092 [ 00:09:36.092 { 00:09:36.092 "name": "4372afb0-3ecd-4b95-88a8-218081fca5d4", 00:09:36.092 "aliases": [ 00:09:36.092 "lvs/lvol" 00:09:36.092 ], 00:09:36.092 "product_name": "Logical Volume", 00:09:36.092 "block_size": 4096, 00:09:36.092 "num_blocks": 38912, 00:09:36.092 "uuid": "4372afb0-3ecd-4b95-88a8-218081fca5d4", 00:09:36.092 "assigned_rate_limits": { 00:09:36.092 "rw_ios_per_sec": 0, 00:09:36.092 "rw_mbytes_per_sec": 0, 00:09:36.092 "r_mbytes_per_sec": 0, 00:09:36.092 "w_mbytes_per_sec": 0 00:09:36.092 }, 00:09:36.092 "claimed": false, 00:09:36.092 "zoned": false, 00:09:36.092 "supported_io_types": { 00:09:36.092 "read": true, 00:09:36.092 "write": true, 00:09:36.092 "unmap": true, 00:09:36.092 "flush": false, 00:09:36.092 "reset": true, 00:09:36.092 "nvme_admin": false, 00:09:36.092 "nvme_io": false, 00:09:36.092 "nvme_io_md": false, 00:09:36.092 "write_zeroes": true, 00:09:36.092 "zcopy": false, 00:09:36.092 "get_zone_info": false, 00:09:36.092 "zone_management": false, 00:09:36.092 "zone_append": 
false, 00:09:36.092 "compare": false, 00:09:36.092 "compare_and_write": false, 00:09:36.092 "abort": false, 00:09:36.092 "seek_hole": true, 00:09:36.092 "seek_data": true, 00:09:36.092 "copy": false, 00:09:36.092 "nvme_iov_md": false 00:09:36.092 }, 00:09:36.092 "driver_specific": { 00:09:36.092 "lvol": { 00:09:36.092 "lvol_store_uuid": "20a2bf19-c4cf-4bb7-9e73-9221c5b2b746", 00:09:36.092 "base_bdev": "aio_bdev", 00:09:36.092 "thin_provision": false, 00:09:36.092 "num_allocated_clusters": 38, 00:09:36.092 "snapshot": false, 00:09:36.092 "clone": false, 00:09:36.092 "esnap_clone": false 00:09:36.092 } 00:09:36.092 } 00:09:36.092 } 00:09:36.092 ] 00:09:36.351 18:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:36.351 18:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20a2bf19-c4cf-4bb7-9e73-9221c5b2b746 00:09:36.351 18:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:36.609 18:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:36.609 18:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20a2bf19-c4cf-4bb7-9e73-9221c5b2b746 00:09:36.609 18:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:36.867 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:36.867 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:37.125 [2024-11-18 18:16:35.269323] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:37.125 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20a2bf19-c4cf-4bb7-9e73-9221c5b2b746 00:09:37.125 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:37.125 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20a2bf19-c4cf-4bb7-9e73-9221c5b2b746 00:09:37.125 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:37.125 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.125 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:37.125 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.125 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:37.125 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.125 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:37.125 18:16:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:37.125 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20a2bf19-c4cf-4bb7-9e73-9221c5b2b746 00:09:37.383 request: 00:09:37.383 { 00:09:37.383 "uuid": "20a2bf19-c4cf-4bb7-9e73-9221c5b2b746", 00:09:37.383 "method": "bdev_lvol_get_lvstores", 00:09:37.383 "req_id": 1 00:09:37.383 } 00:09:37.383 Got JSON-RPC error response 00:09:37.383 response: 00:09:37.383 { 00:09:37.383 "code": -19, 00:09:37.383 "message": "No such device" 00:09:37.383 } 00:09:37.383 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:37.383 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:37.383 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:37.383 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:37.383 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:37.641 aio_bdev 00:09:37.641 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4372afb0-3ecd-4b95-88a8-218081fca5d4 00:09:37.641 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=4372afb0-3ecd-4b95-88a8-218081fca5d4 00:09:37.641 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.641 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:37.641 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.641 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.641 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:37.899 18:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4372afb0-3ecd-4b95-88a8-218081fca5d4 -t 2000 00:09:38.157 [ 00:09:38.157 { 00:09:38.157 "name": "4372afb0-3ecd-4b95-88a8-218081fca5d4", 00:09:38.157 "aliases": [ 00:09:38.157 "lvs/lvol" 00:09:38.157 ], 00:09:38.157 "product_name": "Logical Volume", 00:09:38.157 "block_size": 4096, 00:09:38.157 "num_blocks": 38912, 00:09:38.157 "uuid": "4372afb0-3ecd-4b95-88a8-218081fca5d4", 00:09:38.157 "assigned_rate_limits": { 00:09:38.157 "rw_ios_per_sec": 0, 00:09:38.157 "rw_mbytes_per_sec": 0, 00:09:38.157 "r_mbytes_per_sec": 0, 00:09:38.157 "w_mbytes_per_sec": 0 00:09:38.157 }, 00:09:38.157 "claimed": false, 00:09:38.157 "zoned": false, 00:09:38.157 "supported_io_types": { 00:09:38.157 "read": true, 00:09:38.157 "write": true, 00:09:38.157 "unmap": true, 00:09:38.157 "flush": false, 00:09:38.157 "reset": true, 00:09:38.157 "nvme_admin": false, 00:09:38.157 "nvme_io": false, 00:09:38.157 "nvme_io_md": false, 00:09:38.157 "write_zeroes": true, 00:09:38.157 "zcopy": false, 00:09:38.157 "get_zone_info": false, 00:09:38.157 "zone_management": false, 00:09:38.157 "zone_append": false, 00:09:38.157 "compare": false, 00:09:38.157 "compare_and_write": false, 
00:09:38.157 "abort": false, 00:09:38.157 "seek_hole": true, 00:09:38.157 "seek_data": true, 00:09:38.157 "copy": false, 00:09:38.157 "nvme_iov_md": false 00:09:38.157 }, 00:09:38.157 "driver_specific": { 00:09:38.157 "lvol": { 00:09:38.158 "lvol_store_uuid": "20a2bf19-c4cf-4bb7-9e73-9221c5b2b746", 00:09:38.158 "base_bdev": "aio_bdev", 00:09:38.158 "thin_provision": false, 00:09:38.158 "num_allocated_clusters": 38, 00:09:38.158 "snapshot": false, 00:09:38.158 "clone": false, 00:09:38.158 "esnap_clone": false 00:09:38.158 } 00:09:38.158 } 00:09:38.158 } 00:09:38.158 ] 00:09:38.158 18:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:38.158 18:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20a2bf19-c4cf-4bb7-9e73-9221c5b2b746 00:09:38.158 18:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:38.415 18:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:38.415 18:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20a2bf19-c4cf-4bb7-9e73-9221c5b2b746 00:09:38.415 18:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:38.673 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:38.673 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4372afb0-3ecd-4b95-88a8-218081fca5d4 00:09:39.239 18:16:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 20a2bf19-c4cf-4bb7-9e73-9221c5b2b746 00:09:39.497 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:39.754 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:39.754 00:09:39.754 real 0m22.126s 00:09:39.754 user 0m56.094s 00:09:39.754 sys 0m4.626s 00:09:39.754 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.755 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:39.755 ************************************ 00:09:39.755 END TEST lvs_grow_dirty 00:09:39.755 ************************************ 00:09:39.755 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:39.755 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:39.755 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:39.755 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:39.755 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:39.755 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:39.755 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:39.755 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:39.755 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:39.755 nvmf_trace.0 00:09:39.755 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:39.755 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:39.755 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:39.755 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:39.755 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:39.755 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:39.755 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:39.755 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:39.755 rmmod nvme_tcp 00:09:39.755 rmmod nvme_fabrics 00:09:39.755 rmmod nvme_keyring 00:09:39.755 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:39.755 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:39.755 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:39.755 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2861669 ']' 00:09:39.755 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2861669 00:09:39.755 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2861669 ']' 00:09:39.755 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2861669 
00:09:39.755 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:39.755 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.755 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2861669 00:09:39.755 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:39.755 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:39.755 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2861669' 00:09:39.755 killing process with pid 2861669 00:09:39.755 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2861669 00:09:39.755 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2861669 00:09:41.126 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:41.126 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:41.126 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:41.126 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:41.127 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:41.127 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:41.127 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:41.127 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:41.127 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:09:41.127 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.127 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.127 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.027 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:43.027 00:09:43.027 real 0m48.995s 00:09:43.027 user 1m23.646s 00:09:43.027 sys 0m8.733s 00:09:43.027 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.027 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:43.027 ************************************ 00:09:43.027 END TEST nvmf_lvs_grow 00:09:43.027 ************************************ 00:09:43.027 18:16:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:43.027 18:16:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.027 18:16:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.027 18:16:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:43.027 ************************************ 00:09:43.027 START TEST nvmf_bdev_io_wait 00:09:43.027 ************************************ 00:09:43.027 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:43.027 * Looking for test storage... 
00:09:43.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.027 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.027 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.027 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.286 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.287 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.287 --rc genhtml_branch_coverage=1 00:09:43.287 --rc genhtml_function_coverage=1 00:09:43.287 --rc genhtml_legend=1 00:09:43.287 --rc geninfo_all_blocks=1 00:09:43.287 --rc geninfo_unexecuted_blocks=1 00:09:43.287 00:09:43.287 ' 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.287 --rc genhtml_branch_coverage=1 00:09:43.287 --rc genhtml_function_coverage=1 00:09:43.287 --rc genhtml_legend=1 00:09:43.287 --rc geninfo_all_blocks=1 00:09:43.287 --rc geninfo_unexecuted_blocks=1 00:09:43.287 00:09:43.287 ' 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:43.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.287 --rc genhtml_branch_coverage=1 00:09:43.287 --rc genhtml_function_coverage=1 00:09:43.287 --rc genhtml_legend=1 00:09:43.287 --rc geninfo_all_blocks=1 00:09:43.287 --rc geninfo_unexecuted_blocks=1 00:09:43.287 00:09:43.287 ' 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.287 --rc genhtml_branch_coverage=1 00:09:43.287 --rc genhtml_function_coverage=1 00:09:43.287 --rc genhtml_legend=1 00:09:43.287 --rc geninfo_all_blocks=1 00:09:43.287 --rc geninfo_unexecuted_blocks=1 00:09:43.287 00:09:43.287 ' 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.287 18:16:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:43.287 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:45.189 18:16:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:45.189 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:45.189 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.189 18:16:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:45.189 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.189 
18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:45.189 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.189 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:45.190 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:45.190 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:45.190 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:45.190 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:45.190 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.190 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.190 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.190 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.190 18:16:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:45.190 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.190 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.190 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:45.190 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:45.190 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.190 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.190 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:45.190 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:45.190 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.190 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.448 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.448 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.448 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:45.448 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.448 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.448 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.448 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:45.448 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:45.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:09:45.448 00:09:45.448 --- 10.0.0.2 ping statistics --- 00:09:45.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.448 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:09:45.448 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:45.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:09:45.448 00:09:45.448 --- 10.0.0.1 ping statistics --- 00:09:45.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.448 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2864467 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2864467 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2864467 ']' 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.449 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.449 [2024-11-18 18:16:43.728745] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:45.449 [2024-11-18 18:16:43.728882] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.706 [2024-11-18 18:16:43.887082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:45.706 [2024-11-18 18:16:44.029704] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.706 [2024-11-18 18:16:44.029793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:45.706 [2024-11-18 18:16:44.029820] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.706 [2024-11-18 18:16:44.029845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.706 [2024-11-18 18:16:44.029866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.706 [2024-11-18 18:16:44.032691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.706 [2024-11-18 18:16:44.032827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.707 [2024-11-18 18:16:44.032872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.707 [2024-11-18 18:16:44.032880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:46.638 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.638 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:46.638 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:46.639 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:46.639 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:46.639 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.639 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:46.639 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.639 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:46.639 18:16:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.639 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:46.639 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.639 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:46.897 [2024-11-18 18:16:45.011406] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:46.897 Malloc0 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.897 
18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:46.897 [2024-11-18 18:16:45.115577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2864746 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2864748 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 
00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:46.897 { 00:09:46.897 "params": { 00:09:46.897 "name": "Nvme$subsystem", 00:09:46.897 "trtype": "$TEST_TRANSPORT", 00:09:46.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:46.897 "adrfam": "ipv4", 00:09:46.897 "trsvcid": "$NVMF_PORT", 00:09:46.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:46.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:46.897 "hdgst": ${hdgst:-false}, 00:09:46.897 "ddgst": ${ddgst:-false} 00:09:46.897 }, 00:09:46.897 "method": "bdev_nvme_attach_controller" 00:09:46.897 } 00:09:46.897 EOF 00:09:46.897 )") 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2864750 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:46.897 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:46.897 { 00:09:46.897 "params": { 00:09:46.897 
"name": "Nvme$subsystem", 00:09:46.897 "trtype": "$TEST_TRANSPORT", 00:09:46.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:46.897 "adrfam": "ipv4", 00:09:46.897 "trsvcid": "$NVMF_PORT", 00:09:46.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:46.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:46.898 "hdgst": ${hdgst:-false}, 00:09:46.898 "ddgst": ${ddgst:-false} 00:09:46.898 }, 00:09:46.898 "method": "bdev_nvme_attach_controller" 00:09:46.898 } 00:09:46.898 EOF 00:09:46.898 )") 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2864753 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:46.898 { 00:09:46.898 "params": { 00:09:46.898 "name": "Nvme$subsystem", 00:09:46.898 "trtype": "$TEST_TRANSPORT", 00:09:46.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:46.898 "adrfam": "ipv4", 00:09:46.898 "trsvcid": "$NVMF_PORT", 00:09:46.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:46.898 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:09:46.898 "hdgst": ${hdgst:-false}, 00:09:46.898 "ddgst": ${ddgst:-false} 00:09:46.898 }, 00:09:46.898 "method": "bdev_nvme_attach_controller" 00:09:46.898 } 00:09:46.898 EOF 00:09:46.898 )") 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:46.898 { 00:09:46.898 "params": { 00:09:46.898 "name": "Nvme$subsystem", 00:09:46.898 "trtype": "$TEST_TRANSPORT", 00:09:46.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:46.898 "adrfam": "ipv4", 00:09:46.898 "trsvcid": "$NVMF_PORT", 00:09:46.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:46.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:46.898 "hdgst": ${hdgst:-false}, 00:09:46.898 "ddgst": ${ddgst:-false} 00:09:46.898 }, 00:09:46.898 "method": "bdev_nvme_attach_controller" 00:09:46.898 } 00:09:46.898 EOF 00:09:46.898 )") 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2864746 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:46.898 "params": { 00:09:46.898 "name": "Nvme1", 00:09:46.898 "trtype": "tcp", 00:09:46.898 "traddr": "10.0.0.2", 00:09:46.898 "adrfam": "ipv4", 00:09:46.898 "trsvcid": "4420", 00:09:46.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:46.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:46.898 "hdgst": false, 00:09:46.898 "ddgst": false 00:09:46.898 }, 00:09:46.898 "method": "bdev_nvme_attach_controller" 00:09:46.898 }' 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:46.898 "params": { 00:09:46.898 "name": "Nvme1", 00:09:46.898 "trtype": "tcp", 00:09:46.898 "traddr": "10.0.0.2", 00:09:46.898 "adrfam": "ipv4", 00:09:46.898 "trsvcid": "4420", 00:09:46.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:46.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:46.898 "hdgst": false, 00:09:46.898 "ddgst": false 00:09:46.898 }, 00:09:46.898 "method": "bdev_nvme_attach_controller" 00:09:46.898 }' 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:46.898 "params": { 00:09:46.898 "name": "Nvme1", 00:09:46.898 "trtype": "tcp", 00:09:46.898 "traddr": "10.0.0.2", 00:09:46.898 "adrfam": "ipv4", 00:09:46.898 "trsvcid": "4420", 00:09:46.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:46.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:46.898 "hdgst": false, 00:09:46.898 "ddgst": false 00:09:46.898 }, 00:09:46.898 "method": "bdev_nvme_attach_controller" 00:09:46.898 }' 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:46.898 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:46.898 "params": { 00:09:46.898 "name": "Nvme1", 00:09:46.898 "trtype": "tcp", 00:09:46.898 "traddr": "10.0.0.2", 00:09:46.898 "adrfam": "ipv4", 00:09:46.898 "trsvcid": "4420", 00:09:46.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:46.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:46.898 "hdgst": false, 00:09:46.898 "ddgst": false 00:09:46.898 }, 00:09:46.898 "method": "bdev_nvme_attach_controller" 00:09:46.898 }' 00:09:46.898 [2024-11-18 18:16:45.204237] Starting SPDK v25.01-pre git sha1 
d47eb51c9 / DPDK 24.03.0 initialization... 00:09:46.898 [2024-11-18 18:16:45.204237] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:46.898 [2024-11-18 18:16:45.204237] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:46.898 [2024-11-18 18:16:45.204237] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:46.898 [2024-11-18 18:16:45.204393] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-18 18:16:45.204393] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-18 18:16:45.204393] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-18 18:16:45.204395] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:46.898 --proc-type=auto ] 00:09:46.898 --proc-type=auto ] 00:09:46.898 --proc-type=auto ] 00:09:47.156 [2024-11-18 18:16:45.449399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.414 [2024-11-18 18:16:45.552089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.414 [2024-11-18 18:16:45.571939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:47.414 [2024-11-18 18:16:45.650131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.414 [2024-11-18 
18:16:45.673046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:47.673 [2024-11-18 18:16:45.753190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.673 [2024-11-18 18:16:45.772911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:47.673 [2024-11-18 18:16:45.876930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:47.673 Running I/O for 1 seconds... 00:09:47.931 Running I/O for 1 seconds... 00:09:48.190 Running I/O for 1 seconds... 00:09:48.190 Running I/O for 1 seconds... 00:09:48.755 142688.00 IOPS, 557.38 MiB/s 00:09:48.755 Latency(us) 00:09:48.755 [2024-11-18T17:16:47.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.755 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:48.755 Nvme1n1 : 1.00 142385.46 556.19 0.00 0.00 894.32 394.43 2075.31 00:09:48.755 [2024-11-18T17:16:47.092Z] =================================================================================================================== 00:09:48.755 [2024-11-18T17:16:47.092Z] Total : 142385.46 556.19 0.00 0.00 894.32 394.43 2075.31 00:09:49.013 8031.00 IOPS, 31.37 MiB/s 00:09:49.013 Latency(us) 00:09:49.013 [2024-11-18T17:16:47.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.013 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:49.013 Nvme1n1 : 1.01 8085.95 31.59 0.00 0.00 15745.18 4223.43 22816.24 00:09:49.013 [2024-11-18T17:16:47.350Z] =================================================================================================================== 00:09:49.013 [2024-11-18T17:16:47.351Z] Total : 8085.95 31.59 0.00 0.00 15745.18 4223.43 22816.24 00:09:49.014 6139.00 IOPS, 23.98 MiB/s 00:09:49.014 Latency(us) 00:09:49.014 [2024-11-18T17:16:47.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.014 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, 
IO size: 4096) 00:09:49.014 Nvme1n1 : 1.01 6185.46 24.16 0.00 0.00 20549.06 10825.58 31651.46 00:09:49.014 [2024-11-18T17:16:47.351Z] =================================================================================================================== 00:09:49.014 [2024-11-18T17:16:47.351Z] Total : 6185.46 24.16 0.00 0.00 20549.06 10825.58 31651.46 00:09:49.271 7066.00 IOPS, 27.60 MiB/s 00:09:49.271 Latency(us) 00:09:49.271 [2024-11-18T17:16:47.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.271 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:49.271 Nvme1n1 : 1.01 7130.32 27.85 0.00 0.00 17853.49 7912.87 31845.64 00:09:49.271 [2024-11-18T17:16:47.608Z] =================================================================================================================== 00:09:49.271 [2024-11-18T17:16:47.608Z] Total : 7130.32 27.85 0.00 0.00 17853.49 7912.87 31845.64 00:09:49.529 18:16:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2864748 00:09:49.787 18:16:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2864750 00:09:49.787 18:16:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2864753 00:09:49.787 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:49.787 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.787 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:49.787 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.787 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:49.787 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:49.787 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:49.787 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:49.788 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:49.788 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:49.788 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:49.788 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:49.788 rmmod nvme_tcp 00:09:49.788 rmmod nvme_fabrics 00:09:49.788 rmmod nvme_keyring 00:09:49.788 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:49.788 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:49.788 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:49.788 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2864467 ']' 00:09:49.788 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2864467 00:09:49.788 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2864467 ']' 00:09:49.788 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2864467 00:09:49.788 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:49.788 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:49.788 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2864467 00:09:49.788 18:16:48 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:49.788 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:49.788 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2864467' 00:09:49.788 killing process with pid 2864467 00:09:49.788 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2864467 00:09:49.788 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2864467 00:09:51.161 18:16:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:51.161 18:16:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:51.161 18:16:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:51.161 18:16:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:51.161 18:16:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:51.161 18:16:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:51.161 18:16:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:51.161 18:16:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:51.161 18:16:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:51.161 18:16:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.161 18:16:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.161 18:16:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.061 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:53.061 00:09:53.061 real 0m9.903s 00:09:53.061 user 0m27.976s 00:09:53.061 sys 0m4.251s 00:09:53.061 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.061 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.061 ************************************ 00:09:53.061 END TEST nvmf_bdev_io_wait 00:09:53.061 ************************************ 00:09:53.061 18:16:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:53.061 18:16:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:53.061 18:16:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.061 18:16:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:53.061 ************************************ 00:09:53.061 START TEST nvmf_queue_depth 00:09:53.061 ************************************ 00:09:53.061 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:53.062 * Looking for test storage... 
00:09:53.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:53.062 
18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:53.062 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:53.062 --rc genhtml_branch_coverage=1 00:09:53.062 --rc genhtml_function_coverage=1 00:09:53.062 --rc genhtml_legend=1 00:09:53.062 --rc geninfo_all_blocks=1 00:09:53.062 --rc geninfo_unexecuted_blocks=1 00:09:53.062 00:09:53.062 ' 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:53.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.062 --rc genhtml_branch_coverage=1 00:09:53.062 --rc genhtml_function_coverage=1 00:09:53.062 --rc genhtml_legend=1 00:09:53.062 --rc geninfo_all_blocks=1 00:09:53.062 --rc geninfo_unexecuted_blocks=1 00:09:53.062 00:09:53.062 ' 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:53.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.062 --rc genhtml_branch_coverage=1 00:09:53.062 --rc genhtml_function_coverage=1 00:09:53.062 --rc genhtml_legend=1 00:09:53.062 --rc geninfo_all_blocks=1 00:09:53.062 --rc geninfo_unexecuted_blocks=1 00:09:53.062 00:09:53.062 ' 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:53.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.062 --rc genhtml_branch_coverage=1 00:09:53.062 --rc genhtml_function_coverage=1 00:09:53.062 --rc genhtml_legend=1 00:09:53.062 --rc geninfo_all_blocks=1 00:09:53.062 --rc geninfo_unexecuted_blocks=1 00:09:53.062 00:09:53.062 ' 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.062 18:16:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.062 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.063 18:16:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.063 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.063 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.063 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.063 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.063 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:53.063 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:53.063 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:53.063 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:53.063 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:53.063 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.063 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:53.063 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:53.063 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:53.063 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.063 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.063 18:16:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.063 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:53.063 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:53.063 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:53.063 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:55.590 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:55.590 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:55.590 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:55.590 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:55.590 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:55.590 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:55.590 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:55.590 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:55.590 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:55.590 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:55.590 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:55.590 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:55.590 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:55.590 18:16:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:55.590 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:55.590 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:55.590 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:55.590 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:55.590 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:55.591 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:55.591 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:55.591 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:55.591 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:55.591 
18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:55.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:55.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:09:55.591 00:09:55.591 --- 10.0.0.2 ping statistics --- 00:09:55.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.591 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:55.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:55.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:09:55.591 00:09:55.591 --- 10.0.0.1 ping statistics --- 00:09:55.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.591 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:55.591 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2867125 00:09:55.592 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
2867125 00:09:55.592 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2867125 ']' 00:09:55.592 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.592 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.592 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.592 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.592 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:55.592 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:55.592 [2024-11-18 18:16:53.729341] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:55.592 [2024-11-18 18:16:53.729506] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.592 [2024-11-18 18:16:53.885751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.849 [2024-11-18 18:16:54.024183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.850 [2024-11-18 18:16:54.024269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:55.850 [2024-11-18 18:16:54.024295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.850 [2024-11-18 18:16:54.024321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.850 [2024-11-18 18:16:54.024342] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:55.850 [2024-11-18 18:16:54.025993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.414 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.414 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:56.414 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:56.414 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:56.414 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.672 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.672 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:56.672 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.672 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.672 [2024-11-18 18:16:54.756149] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.672 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.672 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:56.672 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.672 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.672 Malloc0 00:09:56.672 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.672 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:56.672 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.672 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.672 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.673 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:56.673 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.673 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.673 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.673 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:56.673 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.673 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.673 [2024-11-18 18:16:54.880177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:56.673 18:16:54 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.673 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2867277 00:09:56.673 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:56.673 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:56.673 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2867277 /var/tmp/bdevperf.sock 00:09:56.673 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2867277 ']' 00:09:56.673 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:56.673 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.673 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:56.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:56.673 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.673 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.673 [2024-11-18 18:16:54.975816] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:56.673 [2024-11-18 18:16:54.975969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2867277 ] 00:09:56.931 [2024-11-18 18:16:55.134995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.931 [2024-11-18 18:16:55.261526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.864 18:16:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.864 18:16:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:57.864 18:16:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:57.864 18:16:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.864 18:16:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.864 NVMe0n1 00:09:57.864 18:16:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.864 18:16:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:58.121 Running I/O for 10 seconds... 
00:10:00.049 5996.00 IOPS, 23.42 MiB/s [2024-11-18T17:16:59.759Z] 6116.50 IOPS, 23.89 MiB/s [2024-11-18T17:17:00.692Z] 6140.00 IOPS, 23.98 MiB/s [2024-11-18T17:17:01.626Z] 6125.25 IOPS, 23.93 MiB/s [2024-11-18T17:17:02.560Z] 6139.80 IOPS, 23.98 MiB/s [2024-11-18T17:17:03.494Z] 6140.50 IOPS, 23.99 MiB/s [2024-11-18T17:17:04.428Z] 6129.71 IOPS, 23.94 MiB/s [2024-11-18T17:17:05.361Z] 6099.88 IOPS, 23.83 MiB/s [2024-11-18T17:17:06.736Z] 6077.78 IOPS, 23.74 MiB/s [2024-11-18T17:17:06.736Z] 6093.10 IOPS, 23.80 MiB/s 00:10:08.399 Latency(us) 00:10:08.399 [2024-11-18T17:17:06.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.399 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:08.399 Verification LBA range: start 0x0 length 0x4000 00:10:08.399 NVMe0n1 : 10.11 6117.31 23.90 0.00 0.00 166207.33 27185.30 101750.71 00:10:08.399 [2024-11-18T17:17:06.736Z] =================================================================================================================== 00:10:08.399 [2024-11-18T17:17:06.736Z] Total : 6117.31 23.90 0.00 0.00 166207.33 27185.30 101750.71 00:10:08.399 { 00:10:08.399 "results": [ 00:10:08.399 { 00:10:08.399 "job": "NVMe0n1", 00:10:08.399 "core_mask": "0x1", 00:10:08.399 "workload": "verify", 00:10:08.399 "status": "finished", 00:10:08.399 "verify_range": { 00:10:08.399 "start": 0, 00:10:08.399 "length": 16384 00:10:08.399 }, 00:10:08.399 "queue_depth": 1024, 00:10:08.399 "io_size": 4096, 00:10:08.399 "runtime": 10.112777, 00:10:08.399 "iops": 6117.310803946334, 00:10:08.399 "mibps": 23.895745327915368, 00:10:08.399 "io_failed": 0, 00:10:08.399 "io_timeout": 0, 00:10:08.399 "avg_latency_us": 166207.32777471846, 00:10:08.399 "min_latency_us": 27185.303703703703, 00:10:08.399 "max_latency_us": 101750.70814814814 00:10:08.399 } 00:10:08.399 ], 00:10:08.399 "core_count": 1 00:10:08.399 } 00:10:08.399 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 2867277 00:10:08.399 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2867277 ']' 00:10:08.399 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2867277 00:10:08.399 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:08.399 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.399 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2867277 00:10:08.399 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:08.399 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:08.399 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2867277' 00:10:08.399 killing process with pid 2867277 00:10:08.399 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2867277 00:10:08.399 Received shutdown signal, test time was about 10.000000 seconds 00:10:08.399 00:10:08.399 Latency(us) 00:10:08.399 [2024-11-18T17:17:06.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.399 [2024-11-18T17:17:06.736Z] =================================================================================================================== 00:10:08.399 [2024-11-18T17:17:06.736Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:08.399 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2867277 00:10:09.333 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:09.333 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:10:09.333 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:09.333 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:09.333 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:09.333 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:09.333 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.333 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.333 rmmod nvme_tcp 00:10:09.333 rmmod nvme_fabrics 00:10:09.333 rmmod nvme_keyring 00:10:09.333 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.333 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:09.333 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:09.334 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2867125 ']' 00:10:09.334 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2867125 00:10:09.334 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2867125 ']' 00:10:09.334 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2867125 00:10:09.334 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:09.334 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.334 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2867125 00:10:09.334 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:10:09.334 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:09.334 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2867125' 00:10:09.334 killing process with pid 2867125 00:10:09.334 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2867125 00:10:09.334 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2867125 00:10:10.708 18:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:10.708 18:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:10.708 18:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:10.708 18:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:10.708 18:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:10.708 18:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:10.708 18:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:10.708 18:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:10.708 18:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:10.708 18:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.708 18:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.708 18:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.615 18:17:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:12.615 00:10:12.615 real 0m19.669s 00:10:12.615 user 0m28.058s 00:10:12.615 sys 0m3.363s 00:10:12.615 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.615 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.615 ************************************ 00:10:12.615 END TEST nvmf_queue_depth 00:10:12.615 ************************************ 00:10:12.615 18:17:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:12.615 18:17:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:12.615 18:17:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.615 18:17:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:12.615 ************************************ 00:10:12.615 START TEST nvmf_target_multipath 00:10:12.615 ************************************ 00:10:12.615 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:12.874 * Looking for test storage... 
00:10:12.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:12.874 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:12.874 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:12.874 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:12.874 18:17:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:12.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.874 --rc genhtml_branch_coverage=1 00:10:12.874 --rc genhtml_function_coverage=1 00:10:12.874 --rc genhtml_legend=1 00:10:12.874 --rc geninfo_all_blocks=1 00:10:12.874 --rc geninfo_unexecuted_blocks=1 00:10:12.874 00:10:12.874 ' 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:12.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.874 --rc genhtml_branch_coverage=1 00:10:12.874 --rc genhtml_function_coverage=1 00:10:12.874 --rc genhtml_legend=1 00:10:12.874 --rc geninfo_all_blocks=1 00:10:12.874 --rc geninfo_unexecuted_blocks=1 00:10:12.874 00:10:12.874 ' 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:12.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.874 --rc genhtml_branch_coverage=1 00:10:12.874 --rc genhtml_function_coverage=1 00:10:12.874 --rc genhtml_legend=1 00:10:12.874 --rc geninfo_all_blocks=1 00:10:12.874 --rc geninfo_unexecuted_blocks=1 00:10:12.874 00:10:12.874 ' 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:12.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.874 --rc genhtml_branch_coverage=1 00:10:12.874 --rc genhtml_function_coverage=1 00:10:12.874 --rc genhtml_legend=1 00:10:12.874 --rc geninfo_all_blocks=1 00:10:12.874 --rc geninfo_unexecuted_blocks=1 00:10:12.874 00:10:12.874 ' 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.874 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:12.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:12.875 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:15.408 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.408 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:15.409 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
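The discovery pass traced above matches PCI devices by vendor:device ID and then resolves each PCI address to its kernel net interface through sysfs, stripping the directory prefix with a `##*/` expansion. A minimal standalone sketch of that pattern (the PCI address and `cvl_0_0` name are just the ones this log happens to show):

```shell
#!/usr/bin/env bash
# Sketch of the sysfs lookup common.sh performs per PCI device:
# list /sys/bus/pci/devices/<addr>/net/* and keep only the ifname.
pci_to_netdev() {
    local pci=$1
    local devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
    devs=("${devs[@]##*/}")                         # strip path prefix
    printf '%s\n' "${devs[@]}"
}

# The prefix-stripping expansion itself works on any path-like strings:
paths=(/sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0)
echo "${paths[@]##*/}"    # -> cvl_0_0
```

The same `"${array[@]##*/}"` expansion is what turns the full sysfs paths into the bare `cvl_0_0` / `cvl_0_1` names reported in the log.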
00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:15.409 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.409 18:17:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:15.409 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
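With two interfaces selected, the log next builds its test topology: the target NIC is moved into a dedicated network namespace, both sides get addresses on 10.0.0.0/24, and connectivity is verified with a ping in each direction before the nvmf target starts. A condensed sketch of that setup, wrapped in a function since it requires root (interface and namespace names are taken from this log, not generic):

```shell
#!/usr/bin/env bash
# Sketch of the netns topology nvmf_tcp_init assembles (run as root).
setup_test_ns() {
    local ns=cvl_0_0_ns_spdk tgt=cvl_0_0 ini=cvl_0_1
    ip netns add "$ns"                          # isolated stack for target
    ip link set "$tgt" netns "$ns"              # move target NIC into ns
    ip addr add 10.0.0.1/24 dev "$ini"          # initiator, default ns
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
    ip link set "$ini" up
    ip netns exec "$ns" ip link set "$tgt" up
    ip netns exec "$ns" ip link set lo up
    # verify both directions, as the log does with ping -c 1
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

Moving a physical port into a namespace (rather than using veth pairs) lets the target and initiator share one host while still exercising the real NIC driver, which is why the log also inserts an iptables ACCEPT rule for port 4420 on the initiator side.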
00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:15.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:15.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:10:15.409 00:10:15.409 --- 10.0.0.2 ping statistics --- 00:10:15.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.409 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:15.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:15.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:10:15.409 00:10:15.409 --- 10.0.0.1 ping statistics --- 00:10:15.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.409 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:15.409 only one NIC for nvmf test 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:15.409 18:17:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:15.409 rmmod nvme_tcp 00:10:15.409 rmmod nvme_fabrics 00:10:15.409 rmmod nvme_keyring 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.409 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:17.311 00:10:17.311 real 0m4.557s 00:10:17.311 user 0m0.912s 00:10:17.311 sys 0m1.587s 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:17.311 ************************************ 00:10:17.311 END TEST nvmf_target_multipath 00:10:17.311 ************************************ 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:17.311 ************************************ 00:10:17.311 START TEST nvmf_zcopy 00:10:17.311 ************************************ 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:17.311 * Looking for test storage... 00:10:17.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:17.311 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:17.570 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:17.570 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.570 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.570 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.570 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.570 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.570 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:17.570 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.570 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.570 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.570 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.570 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.570 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:17.570 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:17.570 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.570 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:17.570 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:17.570 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:17.570 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.571 18:17:15 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:17.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.571 --rc genhtml_branch_coverage=1 00:10:17.571 --rc genhtml_function_coverage=1 00:10:17.571 --rc genhtml_legend=1 00:10:17.571 --rc geninfo_all_blocks=1 00:10:17.571 --rc geninfo_unexecuted_blocks=1 00:10:17.571 00:10:17.571 ' 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:17.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.571 --rc genhtml_branch_coverage=1 00:10:17.571 --rc genhtml_function_coverage=1 00:10:17.571 --rc genhtml_legend=1 00:10:17.571 --rc geninfo_all_blocks=1 00:10:17.571 --rc geninfo_unexecuted_blocks=1 00:10:17.571 00:10:17.571 ' 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:17.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.571 --rc genhtml_branch_coverage=1 00:10:17.571 --rc genhtml_function_coverage=1 00:10:17.571 --rc genhtml_legend=1 00:10:17.571 --rc geninfo_all_blocks=1 00:10:17.571 --rc geninfo_unexecuted_blocks=1 00:10:17.571 00:10:17.571 ' 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:17.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.571 --rc genhtml_branch_coverage=1 00:10:17.571 --rc 
genhtml_function_coverage=1 00:10:17.571 --rc genhtml_legend=1 00:10:17.571 --rc geninfo_all_blocks=1 00:10:17.571 --rc geninfo_unexecuted_blocks=1 00:10:17.571 00:10:17.571 ' 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.571 18:17:15 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:17.571 18:17:15 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:17.571 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:19.470 18:17:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:19.470 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:19.470 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:19.470 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:19.470 18:17:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:19.470 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:19.470 18:17:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:19.470 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:19.471 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:19.471 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:19.471 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:19.471 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:19.471 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:19.471 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:19.471 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:19.471 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:19.471 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:19.471 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:19.471 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:19.471 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:19.471 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:19.471 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:19.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:19.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:10:19.729 00:10:19.729 --- 10.0.0.2 ping statistics --- 00:10:19.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.729 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:19.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:19.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:10:19.729 00:10:19.729 --- 10.0.0.1 ping statistics --- 00:10:19.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.729 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2872888 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2872888 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2872888 ']' 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.729 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.729 [2024-11-18 18:17:17.945353] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:19.729 [2024-11-18 18:17:17.945513] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.987 [2024-11-18 18:17:18.099361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.987 [2024-11-18 18:17:18.239378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.987 [2024-11-18 18:17:18.239482] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:19.987 [2024-11-18 18:17:18.239509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.987 [2024-11-18 18:17:18.239535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.987 [2024-11-18 18:17:18.239555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:19.987 [2024-11-18 18:17:18.241243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.920 [2024-11-18 18:17:18.947233] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.920 [2024-11-18 18:17:18.963439] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.920 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.920 malloc0 00:10:20.920 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:20.920 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:20.920 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.920 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.920 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.920 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:20.920 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:20.920 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:20.920 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:20.920 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:20.920 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:20.920 { 00:10:20.920 "params": { 00:10:20.920 "name": "Nvme$subsystem", 00:10:20.920 "trtype": "$TEST_TRANSPORT", 00:10:20.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:20.920 "adrfam": "ipv4", 00:10:20.920 "trsvcid": "$NVMF_PORT", 00:10:20.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:20.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:20.921 "hdgst": ${hdgst:-false}, 00:10:20.921 "ddgst": ${ddgst:-false} 00:10:20.921 }, 00:10:20.921 "method": "bdev_nvme_attach_controller" 00:10:20.921 } 00:10:20.921 EOF 00:10:20.921 )") 00:10:20.921 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:20.921 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:20.921 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:20.921 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:20.921 "params": { 00:10:20.921 "name": "Nvme1", 00:10:20.921 "trtype": "tcp", 00:10:20.921 "traddr": "10.0.0.2", 00:10:20.921 "adrfam": "ipv4", 00:10:20.921 "trsvcid": "4420", 00:10:20.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:20.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:20.921 "hdgst": false, 00:10:20.921 "ddgst": false 00:10:20.921 }, 00:10:20.921 "method": "bdev_nvme_attach_controller" 00:10:20.921 }' 00:10:20.921 [2024-11-18 18:17:19.106551] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:20.921 [2024-11-18 18:17:19.106713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2873047 ] 00:10:20.921 [2024-11-18 18:17:19.247193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.177 [2024-11-18 18:17:19.388974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.740 Running I/O for 10 seconds... 
00:10:23.606 4050.00 IOPS, 31.64 MiB/s [2024-11-18T17:17:23.318Z] 4148.00 IOPS, 32.41 MiB/s [2024-11-18T17:17:24.251Z] 4150.67 IOPS, 32.43 MiB/s [2024-11-18T17:17:25.186Z] 4165.25 IOPS, 32.54 MiB/s [2024-11-18T17:17:26.130Z] 4174.00 IOPS, 32.61 MiB/s [2024-11-18T17:17:27.112Z] 4188.83 IOPS, 32.73 MiB/s [2024-11-18T17:17:28.045Z] 4183.29 IOPS, 32.68 MiB/s [2024-11-18T17:17:28.979Z] 4185.88 IOPS, 32.70 MiB/s [2024-11-18T17:17:30.352Z] 4188.11 IOPS, 32.72 MiB/s [2024-11-18T17:17:30.352Z] 4189.80 IOPS, 32.73 MiB/s 00:10:32.015 Latency(us) 00:10:32.015 [2024-11-18T17:17:30.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:32.015 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:32.015 Verification LBA range: start 0x0 length 0x1000 00:10:32.015 Nvme1n1 : 10.02 4194.57 32.77 0.00 0.00 30433.15 3021.94 40583.77 00:10:32.015 [2024-11-18T17:17:30.352Z] =================================================================================================================== 00:10:32.015 [2024-11-18T17:17:30.352Z] Total : 4194.57 32.77 0.00 0.00 30433.15 3021.94 40583.77 00:10:32.581 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2874382 00:10:32.581 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:32.581 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:32.581 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:32.581 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:32.581 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:32.581 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:32.581 18:17:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:32.581 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:32.581 { 00:10:32.581 "params": { 00:10:32.581 "name": "Nvme$subsystem", 00:10:32.581 "trtype": "$TEST_TRANSPORT", 00:10:32.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:32.581 "adrfam": "ipv4", 00:10:32.581 "trsvcid": "$NVMF_PORT", 00:10:32.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:32.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:32.581 "hdgst": ${hdgst:-false}, 00:10:32.581 "ddgst": ${ddgst:-false} 00:10:32.581 }, 00:10:32.581 "method": "bdev_nvme_attach_controller" 00:10:32.581 } 00:10:32.581 EOF 00:10:32.581 )") 00:10:32.581 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:32.581 [2024-11-18 18:17:30.854568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.581 [2024-11-18 18:17:30.854656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.581 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:32.581 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:32.581 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:32.581 "params": { 00:10:32.581 "name": "Nvme1", 00:10:32.581 "trtype": "tcp", 00:10:32.581 "traddr": "10.0.0.2", 00:10:32.581 "adrfam": "ipv4", 00:10:32.581 "trsvcid": "4420", 00:10:32.581 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:32.581 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:32.582 "hdgst": false, 00:10:32.582 "ddgst": false 00:10:32.582 }, 00:10:32.582 "method": "bdev_nvme_attach_controller" 00:10:32.582 }' 00:10:32.582 [2024-11-18 18:17:30.862494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.582 [2024-11-18 18:17:30.862531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.582 [2024-11-18 18:17:30.870478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.582 [2024-11-18 18:17:30.870509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.582 [2024-11-18 18:17:30.878508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.582 [2024-11-18 18:17:30.878539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.582 [2024-11-18 18:17:30.886548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.582 [2024-11-18 18:17:30.886581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.582 [2024-11-18 18:17:30.894557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.582 [2024-11-18 18:17:30.894614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.582 [2024-11-18 18:17:30.902565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.582 [2024-11-18 
18:17:30.902615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.582 [2024-11-18 18:17:30.910580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.582 [2024-11-18 18:17:30.910630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-18 18:17:30.918615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-18 18:17:30.918643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-18 18:17:30.926674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-18 18:17:30.926704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-18 18:17:30.934662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-18 18:17:30.934689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-18 18:17:30.935408] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:10:32.840 [2024-11-18 18:17:30.935520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2874382 ] 00:10:32.840 [2024-11-18 18:17:30.942712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-18 18:17:30.942741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-18 18:17:30.950722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-18 18:17:30.950750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-18 18:17:30.958744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-18 18:17:30.958772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-18 18:17:30.966794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-18 18:17:30.966826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-18 18:17:30.974803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-18 18:17:30.974832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-18 18:17:30.982822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-18 18:17:30.982851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-18 18:17:30.990844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-18 18:17:30.990874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:32.840 [2024-11-18 18:17:30.998840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-18 18:17:30.998868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-18 18:17:31.006875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-18 18:17:31.006917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-18 18:17:31.014926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-18 18:17:31.014970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-18 18:17:31.022930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-18 18:17:31.022957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-18 18:17:31.030981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-18 18:17:31.031015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-18 18:17:31.038991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-18 18:17:31.039024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-18 18:17:31.047005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-18 18:17:31.047037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-18 18:17:31.055047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-18 18:17:31.055080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-18 18:17:31.063065] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-18 18:17:31.063098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-18 18:17:31.071097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-18 18:17:31.071130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-18 18:17:31.079129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-18 18:17:31.079162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-18 18:17:31.081024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.841 [2024-11-18 18:17:31.087141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-18 18:17:31.087175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-18 18:17:31.095177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-18 18:17:31.095217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-18 18:17:31.103270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-18 18:17:31.103328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-18 18:17:31.111193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-18 18:17:31.111227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-18 18:17:31.119230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-18 18:17:31.119264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:32.841 [2024-11-18 18:17:31.127234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-18 18:17:31.127267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-18 18:17:31.135297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-18 18:17:31.135330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-18 18:17:31.143299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-18 18:17:31.143332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-18 18:17:31.151304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-18 18:17:31.151337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-18 18:17:31.159358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-18 18:17:31.159392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-18 18:17:31.167372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-18 18:17:31.167406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-18 18:17:31.175405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-18 18:17:31.175439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.099 [2024-11-18 18:17:31.183445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.099 [2024-11-18 18:17:31.183479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.099 [2024-11-18 18:17:31.191424] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.099 [2024-11-18 18:17:31.191457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.099 [2024-11-18 18:17:31.199463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.099 [2024-11-18 18:17:31.199497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.099 [2024-11-18 18:17:31.207484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.099 [2024-11-18 18:17:31.207527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.099 [2024-11-18 18:17:31.215489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.099 [2024-11-18 18:17:31.215523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.099 [2024-11-18 18:17:31.218008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.099 [2024-11-18 18:17:31.223534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.099 [2024-11-18 18:17:31.223567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.099 [2024-11-18 18:17:31.231571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.099 [2024-11-18 18:17:31.231618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.099 [2024-11-18 18:17:31.239659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.099 [2024-11-18 18:17:31.239708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.099 [2024-11-18 18:17:31.247700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.099 [2024-11-18 18:17:31.247748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:33.099 [2024-11-18 18:17:31.255618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.099 [2024-11-18 18:17:31.255665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.099 [2024-11-18 18:17:31.263664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.099 [2024-11-18 18:17:31.263692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.099 [2024-11-18 18:17:31.271700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.099 [2024-11-18 18:17:31.271729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.099 [2024-11-18 18:17:31.279688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.099 [2024-11-18 18:17:31.279717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.099 [2024-11-18 18:17:31.287719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.099 [2024-11-18 18:17:31.287748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.099 [2024-11-18 18:17:31.295737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.100 [2024-11-18 18:17:31.295765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.100 [2024-11-18 18:17:31.303743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.100 [2024-11-18 18:17:31.303771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.100 [2024-11-18 18:17:31.311852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.100 [2024-11-18 18:17:31.311917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.100 [2024-11-18 18:17:31.319845] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.100 [2024-11-18 18:17:31.319909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.100 [2024-11-18 18:17:31.327915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.100 [2024-11-18 18:17:31.327985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.100 [2024-11-18 18:17:31.335930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.100 [2024-11-18 18:17:31.335988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.100 [2024-11-18 18:17:31.343853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.100 [2024-11-18 18:17:31.343903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.100 [2024-11-18 18:17:31.351882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.100 [2024-11-18 18:17:31.351924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.100 [2024-11-18 18:17:31.359907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.100 [2024-11-18 18:17:31.359935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.100 [2024-11-18 18:17:31.367949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.100 [2024-11-18 18:17:31.367983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.100 [2024-11-18 18:17:31.375976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.100 [2024-11-18 18:17:31.376010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.100 [2024-11-18 18:17:31.383969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:33.100 [2024-11-18 18:17:31.384003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.100 [2024-11-18 18:17:31.392007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.100 [2024-11-18 18:17:31.392041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.100 [2024-11-18 18:17:31.400039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.100 [2024-11-18 18:17:31.400072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.100 [2024-11-18 18:17:31.408041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.100 [2024-11-18 18:17:31.408074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.100 [2024-11-18 18:17:31.416079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.100 [2024-11-18 18:17:31.416112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.100 [2024-11-18 18:17:31.424099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.100 [2024-11-18 18:17:31.424132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.100 [2024-11-18 18:17:31.432122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.100 [2024-11-18 18:17:31.432151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.358 [2024-11-18 18:17:31.440150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.358 [2024-11-18 18:17:31.440183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.358 [2024-11-18 18:17:31.448164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.358 
[2024-11-18 18:17:31.448197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.358 [2024-11-18 18:17:31.456192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.358 [2024-11-18 18:17:31.456225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.358 [2024-11-18 18:17:31.464249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.358 [2024-11-18 18:17:31.464288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.358 [2024-11-18 18:17:31.472311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.358 [2024-11-18 18:17:31.472366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.358 [2024-11-18 18:17:31.480374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.358 [2024-11-18 18:17:31.480434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.358 [2024-11-18 18:17:31.488319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.358 [2024-11-18 18:17:31.488361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.358 [2024-11-18 18:17:31.496297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.358 [2024-11-18 18:17:31.496330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.358 [2024-11-18 18:17:31.504331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.358 [2024-11-18 18:17:31.504364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.358 [2024-11-18 18:17:31.512342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.358 [2024-11-18 18:17:31.512374] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.358 [2024-11-18 18:17:31.520381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.358 [2024-11-18 18:17:31.520414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.358 [2024-11-18 18:17:31.528402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.358 [2024-11-18 18:17:31.528435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.358 [2024-11-18 18:17:31.536409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.359 [2024-11-18 18:17:31.536441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.359 [2024-11-18 18:17:31.544445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.359 [2024-11-18 18:17:31.544478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.359 [2024-11-18 18:17:31.552462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.359 [2024-11-18 18:17:31.552495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.359 [2024-11-18 18:17:31.560492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.359 [2024-11-18 18:17:31.560526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.359 [2024-11-18 18:17:31.568548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.359 [2024-11-18 18:17:31.568581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.359 [2024-11-18 18:17:31.576514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.359 [2024-11-18 18:17:31.576546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:33.359 [2024-11-18 18:17:31.584569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.359 [2024-11-18 18:17:31.584616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.359 [2024-11-18 18:17:31.592593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.359 [2024-11-18 18:17:31.592655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.359 [2024-11-18 18:17:31.600639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.359 [2024-11-18 18:17:31.600690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.359 [2024-11-18 18:17:31.608681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.359 [2024-11-18 18:17:31.608714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.359 [2024-11-18 18:17:31.616680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.359 [2024-11-18 18:17:31.616713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.359 [2024-11-18 18:17:31.624713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.359 [2024-11-18 18:17:31.624746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.359 [2024-11-18 18:17:31.632713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.359 [2024-11-18 18:17:31.632743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.359 [2024-11-18 18:17:31.640723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.359 [2024-11-18 18:17:31.640752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.359 [2024-11-18 18:17:31.648771] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.359 [2024-11-18 18:17:31.648800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.359 [2024-11-18 18:17:31.656769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.359 [2024-11-18 18:17:31.656798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.359 [2024-11-18 18:17:31.664805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.359 [2024-11-18 18:17:31.664836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.359 [2024-11-18 18:17:31.672843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.359 [2024-11-18 18:17:31.672876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.359 [2024-11-18 18:17:31.680860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.359 [2024-11-18 18:17:31.680907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.359 [2024-11-18 18:17:31.688851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.359 [2024-11-18 18:17:31.688882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.617 [2024-11-18 18:17:31.696946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.617 [2024-11-18 18:17:31.696983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.617 [2024-11-18 18:17:31.704935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.617 [2024-11-18 18:17:31.704985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.617 [2024-11-18 18:17:31.712966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:33.617 [2024-11-18 18:17:31.713002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.617 Running I/O for 5 seconds... 00:10:33.617 [2024-11-18 18:17:31.726532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.617 [2024-11-18 18:17:31.726586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.617 [2024-11-18 18:17:31.738743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.617 [2024-11-18 18:17:31.738798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.617 [2024-11-18 18:17:31.753528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.617 [2024-11-18 18:17:31.753582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.617 [2024-11-18 18:17:31.768178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.617 [2024-11-18 18:17:31.768220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.617 [2024-11-18 18:17:31.782808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.617 [2024-11-18 18:17:31.782844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.617 [2024-11-18 18:17:31.797507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.617 [2024-11-18 18:17:31.797560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.617 [2024-11-18 18:17:31.811994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.617 [2024-11-18 18:17:31.812047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.617 [2024-11-18 18:17:31.826572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:33.617 [2024-11-18 18:17:31.826633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.617 [2024-11-18 18:17:31.840896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.617 [2024-11-18 18:17:31.840948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.617 [2024-11-18 18:17:31.855649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.617 [2024-11-18 18:17:31.855689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.617 [2024-11-18 18:17:31.870113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.617 [2024-11-18 18:17:31.870165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.617 [2024-11-18 18:17:31.884466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.617 [2024-11-18 18:17:31.884514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.617 [2024-11-18 18:17:31.898655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.617 [2024-11-18 18:17:31.898690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.617 [2024-11-18 18:17:31.913491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.617 [2024-11-18 18:17:31.913532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.617 [2024-11-18 18:17:31.928604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.617 [2024-11-18 18:17:31.928669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.617 [2024-11-18 18:17:31.940965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.617 
[2024-11-18 18:17:31.941001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.875 [2024-11-18 18:17:31.954904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.875 [2024-11-18 18:17:31.954940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.875 [2024-11-18 18:17:31.969432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.875 [2024-11-18 18:17:31.969473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.875 [2024-11-18 18:17:31.983993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.875 [2024-11-18 18:17:31.984031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.875 [2024-11-18 18:17:31.999418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.875 [2024-11-18 18:17:31.999475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.875 [2024-11-18 18:17:32.012815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.875 [2024-11-18 18:17:32.012867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.875 [2024-11-18 18:17:32.027330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.875 [2024-11-18 18:17:32.027371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.875 [2024-11-18 18:17:32.042235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.875 [2024-11-18 18:17:32.042287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.875 [2024-11-18 18:17:32.056920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.875 [2024-11-18 18:17:32.056971] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.875 [2024-11-18 18:17:32.071471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.875 [2024-11-18 18:17:32.071512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.875 [2024-11-18 18:17:32.085928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.875 [2024-11-18 18:17:32.085963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.875 [2024-11-18 18:17:32.100798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.875 [2024-11-18 18:17:32.100849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.875 [2024-11-18 18:17:32.115012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.875 [2024-11-18 18:17:32.115066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.875 [2024-11-18 18:17:32.129409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.875 [2024-11-18 18:17:32.129445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.875 [2024-11-18 18:17:32.144331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.875 [2024-11-18 18:17:32.144373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.875 [2024-11-18 18:17:32.158886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.875 [2024-11-18 18:17:32.158945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.875 [2024-11-18 18:17:32.174084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.875 [2024-11-18 18:17:32.174137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:33.875 [2024-11-18 18:17:32.188337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.875 [2024-11-18 18:17:32.188389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.875 [2024-11-18 18:17:32.202163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.876 [2024-11-18 18:17:32.202200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.134 [2024-11-18 18:17:32.216534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.134 [2024-11-18 18:17:32.216571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.134 [2024-11-18 18:17:32.230850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.134 [2024-11-18 18:17:32.230903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.134 [2024-11-18 18:17:32.244484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.134 [2024-11-18 18:17:32.244521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.134 [2024-11-18 18:17:32.258585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.134 [2024-11-18 18:17:32.258631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.134 [2024-11-18 18:17:32.273070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.134 [2024-11-18 18:17:32.273106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.134 [2024-11-18 18:17:32.287250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.134 [2024-11-18 18:17:32.287287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.134 [2024-11-18 18:17:32.300845] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.134 [2024-11-18 18:17:32.300883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.134 [2024-11-18 18:17:32.314797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.134 [2024-11-18 18:17:32.314833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.134 [2024-11-18 18:17:32.328686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.134 [2024-11-18 18:17:32.328723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.134 [2024-11-18 18:17:32.343134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.134 [2024-11-18 18:17:32.343172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.134 [2024-11-18 18:17:32.357062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.134 [2024-11-18 18:17:32.357114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.134 [2024-11-18 18:17:32.371795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.134 [2024-11-18 18:17:32.371833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.134 [2024-11-18 18:17:32.386249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.134 [2024-11-18 18:17:32.386286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.134 [2024-11-18 18:17:32.400855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.134 [2024-11-18 18:17:32.400907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.134 [2024-11-18 18:17:32.414945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:34.134 [2024-11-18 18:17:32.414982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.134 [2024-11-18 18:17:32.429282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.134 [2024-11-18 18:17:32.429327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.134 [2024-11-18 18:17:32.444034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.134 [2024-11-18 18:17:32.444079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.134 [2024-11-18 18:17:32.459144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.134 [2024-11-18 18:17:32.459196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.392 [2024-11-18 18:17:32.474185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.392 [2024-11-18 18:17:32.474227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.392 [2024-11-18 18:17:32.489317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.392 [2024-11-18 18:17:32.489358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.392 [2024-11-18 18:17:32.504282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.392 [2024-11-18 18:17:32.504334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.392 [2024-11-18 18:17:32.518517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.392 [2024-11-18 18:17:32.518558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.392 [2024-11-18 18:17:32.533634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.392 
[2024-11-18 18:17:32.533686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.392 [2024-11-18 18:17:32.546425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.392 [2024-11-18 18:17:32.546465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.392 [2024-11-18 18:17:32.561017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.392 [2024-11-18 18:17:32.561068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.392 [2024-11-18 18:17:32.575301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.392 [2024-11-18 18:17:32.575352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.392 [2024-11-18 18:17:32.590302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.392 [2024-11-18 18:17:32.590343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.392 [2024-11-18 18:17:32.605615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.392 [2024-11-18 18:17:32.605650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.392 [2024-11-18 18:17:32.620427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.392 [2024-11-18 18:17:32.620463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.392 [2024-11-18 18:17:32.635642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.392 [2024-11-18 18:17:32.635688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.392 [2024-11-18 18:17:32.648313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.392 [2024-11-18 18:17:32.648354] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.392 [2024-11-18 18:17:32.662998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.392 [2024-11-18 18:17:32.663034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.392 [2024-11-18 18:17:32.677885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.392 [2024-11-18 18:17:32.677935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.392 [2024-11-18 18:17:32.693318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.392 [2024-11-18 18:17:32.693359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.392 [2024-11-18 18:17:32.708290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.392 [2024-11-18 18:17:32.708332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.392 8679.00 IOPS, 67.80 MiB/s [2024-11-18T17:17:32.729Z] [2024-11-18 18:17:32.723177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.392 [2024-11-18 18:17:32.723216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.650 [2024-11-18 18:17:32.738103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.650 [2024-11-18 18:17:32.738143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.650 [2024-11-18 18:17:32.752174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.650 [2024-11-18 18:17:32.752214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.650 [2024-11-18 18:17:32.767208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.650 [2024-11-18 18:17:32.767248] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.650 [2024-11-18 18:17:32.782181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.650 [2024-11-18 18:17:32.782219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.650 [2024-11-18 18:17:32.796513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.650 [2024-11-18 18:17:32.796565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.650 [2024-11-18 18:17:32.811419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.650 [2024-11-18 18:17:32.811457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.650 [2024-11-18 18:17:32.826300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.650 [2024-11-18 18:17:32.826337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.650 [2024-11-18 18:17:32.840871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.650 [2024-11-18 18:17:32.840923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.650 [2024-11-18 18:17:32.855309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.650 [2024-11-18 18:17:32.855360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.650 [2024-11-18 18:17:32.869886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.650 [2024-11-18 18:17:32.869921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.650 [2024-11-18 18:17:32.884792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.650 [2024-11-18 18:17:32.884828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:34.650 [2024-11-18 18:17:32.899032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.650 [2024-11-18 18:17:32.899083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.650 [2024-11-18 18:17:32.913284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.650 [2024-11-18 18:17:32.913324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.650 [2024-11-18 18:17:32.927151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.650 [2024-11-18 18:17:32.927208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.650 [2024-11-18 18:17:32.941913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.650 [2024-11-18 18:17:32.941948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.650 [2024-11-18 18:17:32.956849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.651 [2024-11-18 18:17:32.956900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.651 [2024-11-18 18:17:32.971428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.651 [2024-11-18 18:17:32.971469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.651 [2024-11-18 18:17:32.986299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.651 [2024-11-18 18:17:32.986337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.908 [2024-11-18 18:17:33.001113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.908 [2024-11-18 18:17:33.001154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.908 [2024-11-18 18:17:33.016175] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.908 [2024-11-18 18:17:33.016216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.908 [2024-11-18 18:17:33.031446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.909 [2024-11-18 18:17:33.031496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.909 [2024-11-18 18:17:33.043066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.909 [2024-11-18 18:17:33.043117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.909 [2024-11-18 18:17:33.056882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.909 [2024-11-18 18:17:33.056933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.909 [2024-11-18 18:17:33.071063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.909 [2024-11-18 18:17:33.071115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.909 [2024-11-18 18:17:33.085895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.909 [2024-11-18 18:17:33.085948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.909 [2024-11-18 18:17:33.101383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.909 [2024-11-18 18:17:33.101420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.909 [2024-11-18 18:17:33.116996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.909 [2024-11-18 18:17:33.117033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.909 [2024-11-18 18:17:33.131953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:34.909 [2024-11-18 18:17:33.132004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.909 [2024-11-18 18:17:33.144469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.909 [2024-11-18 18:17:33.144509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.909 [2024-11-18 18:17:33.158209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.909 [2024-11-18 18:17:33.158250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.909 [2024-11-18 18:17:33.172806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.909 [2024-11-18 18:17:33.172843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.909 [2024-11-18 18:17:33.187141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.909 [2024-11-18 18:17:33.187178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.909 [2024-11-18 18:17:33.201773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.909 [2024-11-18 18:17:33.201810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.909 [2024-11-18 18:17:33.216025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.909 [2024-11-18 18:17:33.216077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.909 [2024-11-18 18:17:33.231819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.909 [2024-11-18 18:17:33.231870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.167 [2024-11-18 18:17:33.246668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.167 
[2024-11-18 18:17:33.246704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.167 [2024-11-18 18:17:33.261191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.167 [2024-11-18 18:17:33.261228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.167 [2024-11-18 18:17:33.275951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.167 [2024-11-18 18:17:33.276003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.167 [2024-11-18 18:17:33.290937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.167 [2024-11-18 18:17:33.290974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.167 [2024-11-18 18:17:33.305664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.167 [2024-11-18 18:17:33.305700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.167 [2024-11-18 18:17:33.320672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.167 [2024-11-18 18:17:33.320708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.167 [2024-11-18 18:17:33.335574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.167 [2024-11-18 18:17:33.335624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.167 [2024-11-18 18:17:33.349527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.167 [2024-11-18 18:17:33.349567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.167 [2024-11-18 18:17:33.364360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.167 [2024-11-18 18:17:33.364401] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.167 [2024-11-18 18:17:33.378973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.167 [2024-11-18 18:17:33.379024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.167 [2024-11-18 18:17:33.393549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.167 [2024-11-18 18:17:33.393590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.167 [2024-11-18 18:17:33.408383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.167 [2024-11-18 18:17:33.408424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.167 [2024-11-18 18:17:33.422811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.167 [2024-11-18 18:17:33.422863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.167 [2024-11-18 18:17:33.437213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.167 [2024-11-18 18:17:33.437250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.167 [2024-11-18 18:17:33.451741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.167 [2024-11-18 18:17:33.451778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.167 [2024-11-18 18:17:33.466417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.167 [2024-11-18 18:17:33.466459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.167 [2024-11-18 18:17:33.481233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.167 [2024-11-18 18:17:33.481274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:35.167 [2024-11-18 18:17:33.495457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.167 [2024-11-18 18:17:33.495492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.425 [2024-11-18 18:17:33.510347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.425 [2024-11-18 18:17:33.510399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.425 [2024-11-18 18:17:33.524865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.425 [2024-11-18 18:17:33.524927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.425 [2024-11-18 18:17:33.539319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.425 [2024-11-18 18:17:33.539359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.425 [2024-11-18 18:17:33.553523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.425 [2024-11-18 18:17:33.553563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.425 [2024-11-18 18:17:33.568473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.425 [2024-11-18 18:17:33.568513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.425 [2024-11-18 18:17:33.582895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.425 [2024-11-18 18:17:33.582930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.425 [2024-11-18 18:17:33.597880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.425 [2024-11-18 18:17:33.597916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.425 [2024-11-18 18:17:33.612348] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.425 [2024-11-18 18:17:33.612388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.425 [2024-11-18 18:17:33.626726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.426 [2024-11-18 18:17:33.626764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.426 [2024-11-18 18:17:33.641707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.426 [2024-11-18 18:17:33.641745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.426 [2024-11-18 18:17:33.655711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.426 [2024-11-18 18:17:33.655750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.426 [2024-11-18 18:17:33.670916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.426 [2024-11-18 18:17:33.670967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.426 [2024-11-18 18:17:33.684861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.426 [2024-11-18 18:17:33.684898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.426 [2024-11-18 18:17:33.699055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.426 [2024-11-18 18:17:33.699097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.426 [2024-11-18 18:17:33.713969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.426 [2024-11-18 18:17:33.714030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.426 8666.00 IOPS, 67.70 MiB/s [2024-11-18T17:17:33.763Z] [2024-11-18 18:17:33.729645] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.426 [2024-11-18 18:17:33.729681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.426 [2024-11-18 18:17:33.744143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.426 [2024-11-18 18:17:33.744185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.426 [2024-11-18 18:17:33.758794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.426 [2024-11-18 18:17:33.758845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.684 [2024-11-18 18:17:33.773777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.684 [2024-11-18 18:17:33.773812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.684 [2024-11-18 18:17:33.788529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.684 [2024-11-18 18:17:33.788580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.684 [2024-11-18 18:17:33.802749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.684 [2024-11-18 18:17:33.802796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.684 [2024-11-18 18:17:33.817931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.685 [2024-11-18 18:17:33.817982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.685 [2024-11-18 18:17:33.832820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.685 [2024-11-18 18:17:33.832856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.685 [2024-11-18 18:17:33.847326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:35.685 [2024-11-18 18:17:33.847367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.685 
[... same error pair (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeated for each retry from 18:17:33.862 through 18:17:36.269 ...] 
8604.00 IOPS, 67.22 MiB/s [2024-11-18T17:17:34.799Z] 
8538.75 IOPS, 66.71 MiB/s [2024-11-18T17:17:35.834Z] 
[2024-11-18 18:17:36.283474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.015 [2024-11-18 18:17:36.283512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:38.015 [2024-11-18 18:17:36.297964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.015 [2024-11-18 18:17:36.298002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.015 [2024-11-18 18:17:36.312173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.015 [2024-11-18 18:17:36.312210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.015 [2024-11-18 18:17:36.325835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.015 [2024-11-18 18:17:36.325872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.015 [2024-11-18 18:17:36.340407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.015 [2024-11-18 18:17:36.340445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.273 [2024-11-18 18:17:36.354839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.273 [2024-11-18 18:17:36.354878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.273 [2024-11-18 18:17:36.369316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.273 [2024-11-18 18:17:36.369355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.273 [2024-11-18 18:17:36.383228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.273 [2024-11-18 18:17:36.383281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.273 [2024-11-18 18:17:36.397481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.273 [2024-11-18 18:17:36.397520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.273 [2024-11-18 18:17:36.411729] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.273 [2024-11-18 18:17:36.411766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.273 [2024-11-18 18:17:36.425773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.273 [2024-11-18 18:17:36.425811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.273 [2024-11-18 18:17:36.440311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.273 [2024-11-18 18:17:36.440352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.273 [2024-11-18 18:17:36.454941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.273 [2024-11-18 18:17:36.454978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.273 [2024-11-18 18:17:36.469914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.273 [2024-11-18 18:17:36.469965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.273 [2024-11-18 18:17:36.484836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.273 [2024-11-18 18:17:36.484872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.273 [2024-11-18 18:17:36.499577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.273 [2024-11-18 18:17:36.499628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.273 [2024-11-18 18:17:36.514282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.273 [2024-11-18 18:17:36.514323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.273 [2024-11-18 18:17:36.529156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:38.273 [2024-11-18 18:17:36.529196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.273 [2024-11-18 18:17:36.543867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.273 [2024-11-18 18:17:36.543918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.273 [2024-11-18 18:17:36.558544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.273 [2024-11-18 18:17:36.558585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.273 [2024-11-18 18:17:36.573166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.273 [2024-11-18 18:17:36.573207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.273 [2024-11-18 18:17:36.587735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.273 [2024-11-18 18:17:36.587771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.273 [2024-11-18 18:17:36.602033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.273 [2024-11-18 18:17:36.602083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.532 [2024-11-18 18:17:36.616156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.532 [2024-11-18 18:17:36.616197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.532 [2024-11-18 18:17:36.631173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.532 [2024-11-18 18:17:36.631214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.532 [2024-11-18 18:17:36.645648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.532 
[2024-11-18 18:17:36.645688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.532 [2024-11-18 18:17:36.660904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.532 [2024-11-18 18:17:36.660945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.532 [2024-11-18 18:17:36.676287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.532 [2024-11-18 18:17:36.676329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.532 [2024-11-18 18:17:36.691804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.532 [2024-11-18 18:17:36.691846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.532 [2024-11-18 18:17:36.707210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.532 [2024-11-18 18:17:36.707251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.532 [2024-11-18 18:17:36.722737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.532 [2024-11-18 18:17:36.722793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.532 8554.80 IOPS, 66.83 MiB/s [2024-11-18T17:17:36.869Z] [2024-11-18 18:17:36.737384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.532 [2024-11-18 18:17:36.737425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.532 [2024-11-18 18:17:36.744446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.532 [2024-11-18 18:17:36.744485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.532 00:10:38.532 Latency(us) 00:10:38.532 [2024-11-18T17:17:36.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:10:38.533 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:38.533 Nvme1n1 : 5.01 8556.16 66.84 0.00 0.00 14934.49 4320.52 23690.05 00:10:38.533 [2024-11-18T17:17:36.870Z] =================================================================================================================== 00:10:38.533 [2024-11-18T17:17:36.870Z] Total : 8556.16 66.84 0.00 0.00 14934.49 4320.52 23690.05 00:10:38.533 [2024-11-18 18:17:36.752307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.533 [2024-11-18 18:17:36.752344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.533 [2024-11-18 18:17:36.760341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.533 [2024-11-18 18:17:36.760379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.533 [2024-11-18 18:17:36.768357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.533 [2024-11-18 18:17:36.768393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.533 [2024-11-18 18:17:36.776362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.533 [2024-11-18 18:17:36.776396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.533 [2024-11-18 18:17:36.784402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.533 [2024-11-18 18:17:36.784438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.533 [2024-11-18 18:17:36.792425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.533 [2024-11-18 18:17:36.792472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.533 [2024-11-18 18:17:36.800564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:38.533 [2024-11-18 18:17:36.800643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.533 [2024-11-18 18:17:36.808664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.533 [2024-11-18 18:17:36.808735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.533 [2024-11-18 18:17:36.816487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.533 [2024-11-18 18:17:36.816527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.533 [2024-11-18 18:17:36.824553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.533 [2024-11-18 18:17:36.824587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.533 [2024-11-18 18:17:36.832532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.533 [2024-11-18 18:17:36.832567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.533 [2024-11-18 18:17:36.840539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.533 [2024-11-18 18:17:36.840574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.533 [2024-11-18 18:17:36.848578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.533 [2024-11-18 18:17:36.848621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.533 [2024-11-18 18:17:36.856596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.533 [2024-11-18 18:17:36.856639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.533 [2024-11-18 18:17:36.864602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.533 
[2024-11-18 18:17:36.864643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:36.872654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:36.872688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:36.880663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:36.880697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:36.888700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:36.888738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:36.896848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:36.896915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:36.904882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:36.904949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:36.912866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:36.912926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:36.920785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:36.920818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:36.928794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:36.928828] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:36.936833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:36.936866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:36.944836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:36.944880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:36.952876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:36.952910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:36.960910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:36.960945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:36.968939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:36.968974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:36.976947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:36.976982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:36.984974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:36.985008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:36.993011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:36.993045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:38.792 [2024-11-18 18:17:37.001032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:37.001067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:37.009020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:37.009054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:37.017060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:37.017094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:37.025079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:37.025114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:37.033083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:37.033117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:37.041129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:37.041162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:37.049154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:37.049188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:37.057155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:37.057190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:37.065293] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:37.065347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:37.081406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:37.081473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:37.089270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:37.089305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:37.097300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:37.097333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:37.105315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:37.105358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:37.113338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:37.113371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.792 [2024-11-18 18:17:37.121361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.792 [2024-11-18 18:17:37.121393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.129476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.129538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.137576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.137659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.145556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.145636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.153579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.153647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.161468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.161501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.169475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.169508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.177520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.177553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.185538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.185572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.193560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.193593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.201587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 
[2024-11-18 18:17:37.201630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.209588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.209630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.217636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.217670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.225669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.225703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.233668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.233701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.241706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.241739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.249729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.249776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.257738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.257773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.265776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.265809] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.273773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.273806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.281813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.281847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.289841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.289874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.297871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.297904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.305884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.305916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.314034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.314098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.322039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.322105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.329975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.330009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:39.051 [2024-11-18 18:17:37.337964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.337999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.346008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.346041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.354024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.354058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.362036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.362069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.370074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.370107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.378105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.378140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.051 [2024-11-18 18:17:37.386103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.051 [2024-11-18 18:17:37.386136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.310 [2024-11-18 18:17:37.394170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.310 [2024-11-18 18:17:37.394204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.310 [2024-11-18 18:17:37.402158] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.310 [2024-11-18 18:17:37.402192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2874382) - No such process 00:10:39.569 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2874382 00:10:39.569 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.569 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563
-- # xtrace_disable 00:10:39.569 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.569 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.569 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:39.569 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.569 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.569 delay0 00:10:39.569 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.569 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:39.569 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.569 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.569 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.569 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:39.569 [2024-11-18 18:17:37.873815] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:46.123 Initializing NVMe Controllers 00:10:46.123 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:46.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:46.123 Initialization complete. Launching workers. 
00:10:46.123 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 47 00:10:46.123 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 334, failed to submit 33 00:10:46.123 success 136, unsuccessful 198, failed 0 00:10:46.123 18:17:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:46.123 18:17:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:46.123 18:17:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:46.123 18:17:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:46.123 18:17:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:46.123 18:17:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:46.123 18:17:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.123 18:17:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:46.123 rmmod nvme_tcp 00:10:46.123 rmmod nvme_fabrics 00:10:46.123 rmmod nvme_keyring 00:10:46.123 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.123 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:46.123 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:46.123 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2872888 ']' 00:10:46.123 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2872888 00:10:46.123 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2872888 ']' 00:10:46.123 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2872888 00:10:46.123 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:10:46.123 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.123 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2872888 00:10:46.123 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:46.123 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:46.123 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2872888' 00:10:46.123 killing process with pid 2872888 00:10:46.123 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2872888 00:10:46.123 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2872888 00:10:47.058 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:47.058 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:47.058 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:47.058 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:47.058 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:47.058 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:47.058 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:47.058 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:47.058 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:47.058 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:47.058 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.058 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:49.592 00:10:49.592 real 0m31.845s 00:10:49.592 user 0m48.095s 00:10:49.592 sys 0m7.796s 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.592 ************************************ 00:10:49.592 END TEST nvmf_zcopy 00:10:49.592 ************************************ 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:49.592 ************************************ 00:10:49.592 START TEST nvmf_nmic 00:10:49.592 ************************************ 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:49.592 * Looking for test storage... 
00:10:49.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.592 18:17:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:49.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.592 --rc genhtml_branch_coverage=1 00:10:49.592 --rc genhtml_function_coverage=1 00:10:49.592 --rc genhtml_legend=1 00:10:49.592 --rc geninfo_all_blocks=1 00:10:49.592 --rc geninfo_unexecuted_blocks=1 
00:10:49.592 00:10:49.592 ' 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:49.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.592 --rc genhtml_branch_coverage=1 00:10:49.592 --rc genhtml_function_coverage=1 00:10:49.592 --rc genhtml_legend=1 00:10:49.592 --rc geninfo_all_blocks=1 00:10:49.592 --rc geninfo_unexecuted_blocks=1 00:10:49.592 00:10:49.592 ' 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:49.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.592 --rc genhtml_branch_coverage=1 00:10:49.592 --rc genhtml_function_coverage=1 00:10:49.592 --rc genhtml_legend=1 00:10:49.592 --rc geninfo_all_blocks=1 00:10:49.592 --rc geninfo_unexecuted_blocks=1 00:10:49.592 00:10:49.592 ' 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:49.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.592 --rc genhtml_branch_coverage=1 00:10:49.592 --rc genhtml_function_coverage=1 00:10:49.592 --rc genhtml_legend=1 00:10:49.592 --rc geninfo_all_blocks=1 00:10:49.592 --rc geninfo_unexecuted_blocks=1 00:10:49.592 00:10:49.592 ' 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.592 18:17:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.592 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:49.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:49.593 
18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:49.593 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.494 18:17:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.494 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:51.495 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:51.495 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:51.495 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:51.495 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:51.495 
18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:51.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:10:51.495 00:10:51.495 --- 10.0.0.2 ping statistics --- 00:10:51.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.495 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:51.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:10:51.495 00:10:51.495 --- 10.0.0.1 ping statistics --- 00:10:51.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.495 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:51.495 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:51.754 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:51.754 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:51.754 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:51.754 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.754 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2878038 00:10:51.754 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:10:51.754 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2878038 00:10:51.754 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2878038 ']' 00:10:51.754 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.754 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.754 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.754 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.754 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.754 [2024-11-18 18:17:49.934462] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:51.754 [2024-11-18 18:17:49.934631] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.754 [2024-11-18 18:17:50.085855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.013 [2024-11-18 18:17:50.228017] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.013 [2024-11-18 18:17:50.228100] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:52.013 [2024-11-18 18:17:50.228127] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.013 [2024-11-18 18:17:50.228152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.013 [2024-11-18 18:17:50.228178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.013 [2024-11-18 18:17:50.231062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.013 [2024-11-18 18:17:50.231136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.013 [2024-11-18 18:17:50.231234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.013 [2024-11-18 18:17:50.231240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.946 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.946 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:52.946 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:52.946 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:52.946 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.946 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.946 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:52.946 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.946 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.946 [2024-11-18 18:17:50.951842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.946 
18:17:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.946 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:52.946 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.946 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.946 Malloc0 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.946 [2024-11-18 18:17:51.068045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:52.946 test case1: single bdev can't be used in multiple subsystems 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.946 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.946 [2024-11-18 18:17:51.091726] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:52.946 [2024-11-18 
18:17:51.091768] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:52.946 [2024-11-18 18:17:51.091801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.946 request: 00:10:52.946 { 00:10:52.946 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:52.946 "namespace": { 00:10:52.946 "bdev_name": "Malloc0", 00:10:52.946 "no_auto_visible": false 00:10:52.946 }, 00:10:52.946 "method": "nvmf_subsystem_add_ns", 00:10:52.946 "req_id": 1 00:10:52.947 } 00:10:52.947 Got JSON-RPC error response 00:10:52.947 response: 00:10:52.947 { 00:10:52.947 "code": -32602, 00:10:52.947 "message": "Invalid parameters" 00:10:52.947 } 00:10:52.947 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:52.947 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:52.947 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:52.947 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:52.947 Adding namespace failed - expected result. 
00:10:52.947 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:52.947 test case2: host connect to nvmf target in multiple paths 00:10:52.947 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:52.947 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.947 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.947 [2024-11-18 18:17:51.099868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:52.947 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.947 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:53.513 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:54.079 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:54.079 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:54.079 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:54.079 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:54.079 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:10:56.605 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:56.605 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:56.605 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:56.605 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:56.605 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:56.605 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:56.605 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:56.605 [global] 00:10:56.605 thread=1 00:10:56.605 invalidate=1 00:10:56.605 rw=write 00:10:56.605 time_based=1 00:10:56.605 runtime=1 00:10:56.605 ioengine=libaio 00:10:56.605 direct=1 00:10:56.605 bs=4096 00:10:56.605 iodepth=1 00:10:56.605 norandommap=0 00:10:56.605 numjobs=1 00:10:56.605 00:10:56.605 verify_dump=1 00:10:56.605 verify_backlog=512 00:10:56.605 verify_state_save=0 00:10:56.605 do_verify=1 00:10:56.605 verify=crc32c-intel 00:10:56.605 [job0] 00:10:56.605 filename=/dev/nvme0n1 00:10:56.605 Could not set queue depth (nvme0n1) 00:10:56.605 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:56.605 fio-3.35 00:10:56.605 Starting 1 thread 00:10:57.538 00:10:57.539 job0: (groupid=0, jobs=1): err= 0: pid=2878689: Mon Nov 18 18:17:55 2024 00:10:57.539 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:10:57.539 slat (nsec): min=6265, max=33258, avg=25077.09, stdev=9278.53 00:10:57.539 clat (usec): min=40913, max=41379, avg=40984.51, stdev=95.37 00:10:57.539 lat (usec): min=40946, max=41385, 
avg=41009.59, stdev=89.62 00:10:57.539 clat percentiles (usec): 00:10:57.539 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:57.539 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:57.539 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:57.539 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:57.539 | 99.99th=[41157] 00:10:57.539 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:10:57.539 slat (nsec): min=5266, max=34872, avg=6355.85, stdev=2025.56 00:10:57.539 clat (usec): min=176, max=1164, avg=202.14, stdev=47.55 00:10:57.539 lat (usec): min=181, max=1170, avg=208.50, stdev=47.99 00:10:57.539 clat percentiles (usec): 00:10:57.539 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 190], 00:10:57.539 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 198], 00:10:57.539 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 217], 95.00th=[ 245], 00:10:57.539 | 99.00th=[ 251], 99.50th=[ 293], 99.90th=[ 1172], 99.95th=[ 1172], 00:10:57.539 | 99.99th=[ 1172] 00:10:57.539 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:57.539 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:57.539 lat (usec) : 250=95.13%, 500=0.37%, 750=0.19% 00:10:57.539 lat (msec) : 2=0.19%, 50=4.12% 00:10:57.539 cpu : usr=0.10%, sys=0.30%, ctx=534, majf=0, minf=1 00:10:57.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.539 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.539 00:10:57.539 Run status group 0 (all jobs): 00:10:57.539 READ: bw=87.1KiB/s (89.2kB/s), 87.1KiB/s-87.1KiB/s (89.2kB/s-89.2kB/s), 
io=88.0KiB (90.1kB), run=1010-1010msec 00:10:57.539 WRITE: bw=2028KiB/s (2076kB/s), 2028KiB/s-2028KiB/s (2076kB/s-2076kB/s), io=2048KiB (2097kB), run=1010-1010msec 00:10:57.539 00:10:57.539 Disk stats (read/write): 00:10:57.539 nvme0n1: ios=69/512, merge=0/0, ticks=805/102, in_queue=907, util=91.78% 00:10:57.539 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:57.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:57.797 rmmod nvme_tcp 00:10:57.797 rmmod nvme_fabrics 00:10:57.797 rmmod nvme_keyring 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2878038 ']' 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2878038 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2878038 ']' 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2878038 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2878038 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2878038' 00:10:57.797 killing process with pid 2878038 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2878038 00:10:57.797 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2878038 00:10:59.230 18:17:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:59.230 18:17:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:59.230 18:17:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:59.230 18:17:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:59.230 18:17:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:59.230 18:17:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:59.230 18:17:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:59.230 18:17:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:59.230 18:17:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:59.230 18:17:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.230 18:17:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.230 18:17:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.133 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:01.133 00:11:01.133 real 0m11.995s 00:11:01.133 user 0m28.609s 00:11:01.133 sys 0m2.690s 00:11:01.133 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.133 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.133 ************************************ 00:11:01.133 END TEST nvmf_nmic 00:11:01.133 ************************************ 00:11:01.133 18:17:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:01.133 18:17:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:01.133 18:17:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.133 18:17:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:01.392 ************************************ 00:11:01.392 START TEST nvmf_fio_target 00:11:01.392 ************************************ 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:01.392 * Looking for test storage... 00:11:01.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:01.392 18:17:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:01.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.392 --rc genhtml_branch_coverage=1 00:11:01.392 --rc genhtml_function_coverage=1 00:11:01.392 --rc genhtml_legend=1 00:11:01.392 --rc geninfo_all_blocks=1 00:11:01.392 --rc geninfo_unexecuted_blocks=1 00:11:01.392 00:11:01.392 ' 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:01.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.392 --rc genhtml_branch_coverage=1 00:11:01.392 --rc genhtml_function_coverage=1 00:11:01.392 --rc genhtml_legend=1 00:11:01.392 --rc geninfo_all_blocks=1 00:11:01.392 --rc geninfo_unexecuted_blocks=1 00:11:01.392 00:11:01.392 ' 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:01.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.392 --rc genhtml_branch_coverage=1 00:11:01.392 --rc genhtml_function_coverage=1 00:11:01.392 --rc genhtml_legend=1 00:11:01.392 --rc geninfo_all_blocks=1 00:11:01.392 --rc geninfo_unexecuted_blocks=1 00:11:01.392 00:11:01.392 ' 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:11:01.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.392 --rc genhtml_branch_coverage=1 00:11:01.392 --rc genhtml_function_coverage=1 00:11:01.392 --rc genhtml_legend=1 00:11:01.392 --rc geninfo_all_blocks=1 00:11:01.392 --rc geninfo_unexecuted_blocks=1 00:11:01.392 00:11:01.392 ' 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:01.392 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:01.393 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:03.292 18:18:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:03.292 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:03.292 18:18:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:03.292 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:03.292 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:03.293 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:03.293 Found net devices under 0000:0a:00.1: cvl_0_1 
00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:03.293 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:03.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:03.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:11:03.551 00:11:03.551 --- 10.0.0.2 ping statistics --- 00:11:03.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.551 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:03.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:03.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:11:03.551 00:11:03.551 --- 10.0.0.1 ping statistics --- 00:11:03.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.551 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2881011 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2881011 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2881011 ']' 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.551 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.551 [2024-11-18 18:18:01.851888] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:11:03.551 [2024-11-18 18:18:01.852068] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.809 [2024-11-18 18:18:01.997555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.809 [2024-11-18 18:18:02.141828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.809 [2024-11-18 18:18:02.141921] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.809 [2024-11-18 18:18:02.141949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.809 [2024-11-18 18:18:02.141975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.809 [2024-11-18 18:18:02.141996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:03.809 [2024-11-18 18:18:02.144977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.809 [2024-11-18 18:18:02.145047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.809 [2024-11-18 18:18:02.145105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.809 [2024-11-18 18:18:02.145110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.742 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.742 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:04.742 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:04.742 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:04.742 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.742 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.742 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:05.000 [2024-11-18 18:18:03.087500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.000 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:05.258 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:05.258 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:05.516 18:18:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:05.516 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:06.080 18:18:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:06.080 18:18:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:06.338 18:18:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:06.338 18:18:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:06.596 18:18:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:06.854 18:18:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:06.854 18:18:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.419 18:18:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:07.419 18:18:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.677 18:18:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:07.677 18:18:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:07.935 18:18:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:08.193 18:18:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:08.193 18:18:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:08.450 18:18:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:08.450 18:18:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:08.708 18:18:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:08.966 [2024-11-18 18:18:07.200448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.966 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:09.223 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:09.481 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:11:10.415 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:10.415 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:10.415 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:10.415 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:10.415 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:10.415 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:12.314 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:12.314 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:12.314 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:12.314 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:12.314 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:12.314 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:12.314 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:12.314 [global] 00:11:12.314 thread=1 00:11:12.314 invalidate=1 00:11:12.314 rw=write 00:11:12.314 time_based=1 00:11:12.314 runtime=1 00:11:12.314 ioengine=libaio 00:11:12.314 direct=1 00:11:12.315 bs=4096 00:11:12.315 iodepth=1 00:11:12.315 norandommap=0 00:11:12.315 numjobs=1 00:11:12.315 00:11:12.315 
verify_dump=1 00:11:12.315 verify_backlog=512 00:11:12.315 verify_state_save=0 00:11:12.315 do_verify=1 00:11:12.315 verify=crc32c-intel 00:11:12.315 [job0] 00:11:12.315 filename=/dev/nvme0n1 00:11:12.315 [job1] 00:11:12.315 filename=/dev/nvme0n2 00:11:12.315 [job2] 00:11:12.315 filename=/dev/nvme0n3 00:11:12.315 [job3] 00:11:12.315 filename=/dev/nvme0n4 00:11:12.315 Could not set queue depth (nvme0n1) 00:11:12.315 Could not set queue depth (nvme0n2) 00:11:12.315 Could not set queue depth (nvme0n3) 00:11:12.315 Could not set queue depth (nvme0n4) 00:11:12.572 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.572 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.572 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.572 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.572 fio-3.35 00:11:12.572 Starting 4 threads 00:11:13.947 00:11:13.947 job0: (groupid=0, jobs=1): err= 0: pid=2882737: Mon Nov 18 18:18:12 2024 00:11:13.947 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:13.947 slat (nsec): min=7371, max=60694, avg=16009.57, stdev=6418.11 00:11:13.947 clat (usec): min=238, max=41503, avg=334.16, stdev=1052.08 00:11:13.947 lat (usec): min=247, max=41514, avg=350.16, stdev=1051.90 00:11:13.947 clat percentiles (usec): 00:11:13.947 | 1.00th=[ 251], 5.00th=[ 262], 10.00th=[ 273], 20.00th=[ 281], 00:11:13.947 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 302], 00:11:13.947 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 359], 95.00th=[ 437], 00:11:13.947 | 99.00th=[ 457], 99.50th=[ 461], 99.90th=[ 545], 99.95th=[41681], 00:11:13.947 | 99.99th=[41681] 00:11:13.947 write: IOPS=1799, BW=7197KiB/s (7370kB/s)(7204KiB/1001msec); 0 zone resets 00:11:13.947 slat (nsec): min=7777, max=73269, avg=19921.01, 
stdev=9072.96 00:11:13.947 clat (usec): min=173, max=1598, avg=227.68, stdev=41.70 00:11:13.947 lat (usec): min=183, max=1609, avg=247.60, stdev=42.54 00:11:13.947 clat percentiles (usec): 00:11:13.947 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 198], 20.00th=[ 210], 00:11:13.947 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 229], 00:11:13.947 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 273], 00:11:13.947 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 396], 99.95th=[ 1598], 00:11:13.947 | 99.99th=[ 1598] 00:11:13.947 bw ( KiB/s): min= 8192, max= 8192, per=44.12%, avg=8192.00, stdev= 0.00, samples=1 00:11:13.947 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:13.947 lat (usec) : 250=47.35%, 500=52.53%, 750=0.06% 00:11:13.947 lat (msec) : 2=0.03%, 50=0.03% 00:11:13.947 cpu : usr=3.80%, sys=8.30%, ctx=3339, majf=0, minf=1 00:11:13.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.947 issued rwts: total=1536,1801,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.947 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.947 job1: (groupid=0, jobs=1): err= 0: pid=2882739: Mon Nov 18 18:18:12 2024 00:11:13.947 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:13.947 slat (nsec): min=4808, max=73074, avg=17082.78, stdev=10026.65 00:11:13.947 clat (usec): min=224, max=41454, avg=328.30, stdev=1050.99 00:11:13.947 lat (usec): min=233, max=41462, avg=345.38, stdev=1051.03 00:11:13.947 clat percentiles (usec): 00:11:13.947 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 262], 00:11:13.947 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 306], 00:11:13.947 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 367], 95.00th=[ 375], 00:11:13.947 | 99.00th=[ 400], 99.50th=[ 408], 99.90th=[ 445], 
99.95th=[41681], 00:11:13.947 | 99.99th=[41681] 00:11:13.947 write: IOPS=2000, BW=8004KiB/s (8196kB/s)(8012KiB/1001msec); 0 zone resets 00:11:13.947 slat (nsec): min=5983, max=42089, avg=12905.90, stdev=5373.57 00:11:13.947 clat (usec): min=170, max=1994, avg=213.78, stdev=48.25 00:11:13.947 lat (usec): min=179, max=2003, avg=226.69, stdev=48.09 00:11:13.947 clat percentiles (usec): 00:11:13.947 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 192], 00:11:13.947 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 215], 00:11:13.947 | 70.00th=[ 227], 80.00th=[ 239], 90.00th=[ 245], 95.00th=[ 251], 00:11:13.947 | 99.00th=[ 269], 99.50th=[ 306], 99.90th=[ 494], 99.95th=[ 545], 00:11:13.947 | 99.99th=[ 1991] 00:11:13.947 bw ( KiB/s): min= 8192, max= 8192, per=44.12%, avg=8192.00, stdev= 0.00, samples=1 00:11:13.947 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:13.947 lat (usec) : 250=56.32%, 500=43.60%, 750=0.03% 00:11:13.947 lat (msec) : 2=0.03%, 50=0.03% 00:11:13.947 cpu : usr=3.00%, sys=5.70%, ctx=3539, majf=0, minf=2 00:11:13.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.947 issued rwts: total=1536,2003,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.947 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.947 job2: (groupid=0, jobs=1): err= 0: pid=2882744: Mon Nov 18 18:18:12 2024 00:11:13.947 read: IOPS=225, BW=900KiB/s (922kB/s)(920KiB/1022msec) 00:11:13.947 slat (nsec): min=7854, max=38029, avg=16726.74, stdev=5758.00 00:11:13.947 clat (usec): min=294, max=41141, avg=3748.49, stdev=11198.34 00:11:13.947 lat (usec): min=304, max=41150, avg=3765.22, stdev=11198.57 00:11:13.947 clat percentiles (usec): 00:11:13.947 | 1.00th=[ 310], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 343], 00:11:13.947 | 30.00th=[ 
347], 40.00th=[ 355], 50.00th=[ 367], 60.00th=[ 383], 00:11:13.947 | 70.00th=[ 400], 80.00th=[ 490], 90.00th=[ 553], 95.00th=[41157], 00:11:13.947 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:13.947 | 99.99th=[41157] 00:11:13.947 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:11:13.947 slat (nsec): min=7770, max=57372, avg=18810.71, stdev=8680.02 00:11:13.947 clat (usec): min=222, max=1473, avg=278.56, stdev=60.30 00:11:13.947 lat (usec): min=231, max=1483, avg=297.37, stdev=60.44 00:11:13.947 clat percentiles (usec): 00:11:13.947 | 1.00th=[ 233], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 255], 00:11:13.947 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:11:13.947 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 326], 00:11:13.947 | 99.00th=[ 388], 99.50th=[ 490], 99.90th=[ 1467], 99.95th=[ 1467], 00:11:13.947 | 99.99th=[ 1467] 00:11:13.947 bw ( KiB/s): min= 4096, max= 4096, per=22.06%, avg=4096.00, stdev= 0.00, samples=1 00:11:13.947 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:13.947 lat (usec) : 250=8.22%, 500=86.66%, 750=1.89%, 1000=0.13% 00:11:13.947 lat (msec) : 2=0.54%, 50=2.56% 00:11:13.947 cpu : usr=1.18%, sys=1.47%, ctx=742, majf=0, minf=1 00:11:13.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.947 issued rwts: total=230,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.947 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.947 job3: (groupid=0, jobs=1): err= 0: pid=2882745: Mon Nov 18 18:18:12 2024 00:11:13.947 read: IOPS=149, BW=596KiB/s (610kB/s)(620KiB/1040msec) 00:11:13.947 slat (nsec): min=9752, max=37966, avg=20555.77, stdev=6348.35 00:11:13.947 clat (usec): min=291, max=41319, avg=5641.87, stdev=13632.50 
00:11:13.947 lat (usec): min=309, max=41343, avg=5662.42, stdev=13630.96 00:11:13.947 clat percentiles (usec): 00:11:13.947 | 1.00th=[ 310], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 338], 00:11:13.947 | 30.00th=[ 351], 40.00th=[ 396], 50.00th=[ 437], 60.00th=[ 474], 00:11:13.947 | 70.00th=[ 486], 80.00th=[ 502], 90.00th=[40633], 95.00th=[41157], 00:11:13.947 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:13.947 | 99.99th=[41157] 00:11:13.947 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:11:13.947 slat (nsec): min=8087, max=48636, avg=19982.36, stdev=8799.95 00:11:13.947 clat (usec): min=203, max=1488, avg=290.19, stdev=89.11 00:11:13.947 lat (usec): min=214, max=1498, avg=310.18, stdev=89.53 00:11:13.947 clat percentiles (usec): 00:11:13.947 | 1.00th=[ 221], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 253], 00:11:13.947 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:11:13.947 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 338], 95.00th=[ 437], 00:11:13.947 | 99.00th=[ 529], 99.50th=[ 889], 99.90th=[ 1483], 99.95th=[ 1483], 00:11:13.947 | 99.99th=[ 1483] 00:11:13.947 bw ( KiB/s): min= 4096, max= 4096, per=22.06%, avg=4096.00, stdev= 0.00, samples=1 00:11:13.947 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:13.947 lat (usec) : 250=12.59%, 500=81.41%, 750=2.40%, 1000=0.30% 00:11:13.947 lat (msec) : 2=0.30%, 50=3.00% 00:11:13.947 cpu : usr=0.87%, sys=1.64%, ctx=669, majf=0, minf=1 00:11:13.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.947 issued rwts: total=155,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.947 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.947 00:11:13.947 Run status group 0 (all jobs): 00:11:13.947 READ: bw=13.0MiB/s 
(13.6MB/s), 596KiB/s-6138KiB/s (610kB/s-6285kB/s), io=13.5MiB (14.2MB), run=1001-1040msec 00:11:13.947 WRITE: bw=18.1MiB/s (19.0MB/s), 1969KiB/s-8004KiB/s (2016kB/s-8196kB/s), io=18.9MiB (19.8MB), run=1001-1040msec 00:11:13.947 00:11:13.947 Disk stats (read/write): 00:11:13.947 nvme0n1: ios=1382/1536, merge=0/0, ticks=504/308, in_queue=812, util=85.87% 00:11:13.947 nvme0n2: ios=1485/1536, merge=0/0, ticks=533/305, in_queue=838, util=90.96% 00:11:13.947 nvme0n3: ios=282/512, merge=0/0, ticks=743/137, in_queue=880, util=94.99% 00:11:13.947 nvme0n4: ios=207/512, merge=0/0, ticks=964/140, in_queue=1104, util=94.53% 00:11:13.947 18:18:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:13.947 [global] 00:11:13.947 thread=1 00:11:13.947 invalidate=1 00:11:13.947 rw=randwrite 00:11:13.947 time_based=1 00:11:13.947 runtime=1 00:11:13.948 ioengine=libaio 00:11:13.948 direct=1 00:11:13.948 bs=4096 00:11:13.948 iodepth=1 00:11:13.948 norandommap=0 00:11:13.948 numjobs=1 00:11:13.948 00:11:13.948 verify_dump=1 00:11:13.948 verify_backlog=512 00:11:13.948 verify_state_save=0 00:11:13.948 do_verify=1 00:11:13.948 verify=crc32c-intel 00:11:13.948 [job0] 00:11:13.948 filename=/dev/nvme0n1 00:11:13.948 [job1] 00:11:13.948 filename=/dev/nvme0n2 00:11:13.948 [job2] 00:11:13.948 filename=/dev/nvme0n3 00:11:13.948 [job3] 00:11:13.948 filename=/dev/nvme0n4 00:11:13.948 Could not set queue depth (nvme0n1) 00:11:13.948 Could not set queue depth (nvme0n2) 00:11:13.948 Could not set queue depth (nvme0n3) 00:11:13.948 Could not set queue depth (nvme0n4) 00:11:13.948 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.948 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.948 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.948 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.948 fio-3.35 00:11:13.948 Starting 4 threads 00:11:15.321 00:11:15.321 job0: (groupid=0, jobs=1): err= 0: pid=2882972: Mon Nov 18 18:18:13 2024 00:11:15.321 read: IOPS=20, BW=82.7KiB/s (84.7kB/s)(84.0KiB/1016msec) 00:11:15.321 slat (nsec): min=12173, max=36880, avg=26777.19, stdev=9982.54 00:11:15.321 clat (usec): min=40730, max=41976, avg=41139.58, stdev=411.67 00:11:15.321 lat (usec): min=40752, max=41991, avg=41166.36, stdev=410.29 00:11:15.321 clat percentiles (usec): 00:11:15.321 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:15.321 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:15.321 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:11:15.321 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:15.321 | 99.99th=[42206] 00:11:15.321 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:11:15.321 slat (nsec): min=7598, max=57574, avg=17321.80, stdev=9364.62 00:11:15.321 clat (usec): min=188, max=739, avg=270.63, stdev=53.71 00:11:15.321 lat (usec): min=198, max=755, avg=287.95, stdev=55.37 00:11:15.321 clat percentiles (usec): 00:11:15.321 | 1.00th=[ 202], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 237], 00:11:15.321 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:11:15.321 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 322], 95.00th=[ 367], 00:11:15.321 | 99.00th=[ 486], 99.50th=[ 578], 99.90th=[ 742], 99.95th=[ 742], 00:11:15.321 | 99.99th=[ 742] 00:11:15.321 bw ( KiB/s): min= 4096, max= 4096, per=51.90%, avg=4096.00, stdev= 0.00, samples=1 00:11:15.321 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:15.321 lat (usec) : 250=32.46%, 500=62.85%, 750=0.75% 00:11:15.321 lat (msec) : 50=3.94% 00:11:15.321 cpu : 
usr=0.89%, sys=0.79%, ctx=535, majf=0, minf=1 00:11:15.321 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.321 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.321 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.321 job1: (groupid=0, jobs=1): err= 0: pid=2882973: Mon Nov 18 18:18:13 2024 00:11:15.321 read: IOPS=20, BW=81.1KiB/s (83.0kB/s)(84.0KiB/1036msec) 00:11:15.321 slat (nsec): min=13439, max=36690, avg=28383.81, stdev=9441.07 00:11:15.321 clat (usec): min=40603, max=41055, avg=40941.13, stdev=87.68 00:11:15.321 lat (usec): min=40623, max=41069, avg=40969.51, stdev=88.27 00:11:15.321 clat percentiles (usec): 00:11:15.321 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:15.321 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:15.321 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:15.321 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:15.321 | 99.99th=[41157] 00:11:15.321 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:11:15.321 slat (nsec): min=7647, max=66178, avg=20184.99, stdev=10554.37 00:11:15.321 clat (usec): min=189, max=923, avg=314.02, stdev=93.58 00:11:15.321 lat (usec): min=198, max=952, avg=334.20, stdev=93.95 00:11:15.321 clat percentiles (usec): 00:11:15.321 | 1.00th=[ 206], 5.00th=[ 221], 10.00th=[ 231], 20.00th=[ 243], 00:11:15.321 | 30.00th=[ 255], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 297], 00:11:15.321 | 70.00th=[ 326], 80.00th=[ 396], 90.00th=[ 461], 95.00th=[ 490], 00:11:15.321 | 99.00th=[ 537], 99.50th=[ 668], 99.90th=[ 922], 99.95th=[ 922], 00:11:15.321 | 99.99th=[ 922] 00:11:15.321 bw ( KiB/s): min= 4096, max= 4096, per=51.90%, avg=4096.00, stdev= 0.00, 
samples=1 00:11:15.321 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:15.321 lat (usec) : 250=26.27%, 500=66.04%, 750=3.38%, 1000=0.38% 00:11:15.321 lat (msec) : 50=3.94% 00:11:15.321 cpu : usr=0.77%, sys=1.26%, ctx=536, majf=0, minf=1 00:11:15.321 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.321 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.321 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.321 job2: (groupid=0, jobs=1): err= 0: pid=2882979: Mon Nov 18 18:18:13 2024 00:11:15.321 read: IOPS=20, BW=80.9KiB/s (82.9kB/s)(84.0KiB/1038msec) 00:11:15.321 slat (nsec): min=6382, max=34457, avg=25206.67, stdev=10438.22 00:11:15.321 clat (usec): min=40919, max=41998, avg=41103.86, stdev=355.25 00:11:15.321 lat (usec): min=40953, max=42012, avg=41129.06, stdev=350.24 00:11:15.321 clat percentiles (usec): 00:11:15.321 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:15.321 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:15.321 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:11:15.322 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:15.322 | 99.99th=[42206] 00:11:15.322 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:11:15.322 slat (nsec): min=6109, max=64320, avg=18120.51, stdev=9724.90 00:11:15.322 clat (usec): min=182, max=1150, avg=315.65, stdev=105.27 00:11:15.322 lat (usec): min=189, max=1162, avg=333.77, stdev=103.80 00:11:15.322 clat percentiles (usec): 00:11:15.322 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 215], 00:11:15.322 | 30.00th=[ 235], 40.00th=[ 265], 50.00th=[ 293], 60.00th=[ 334], 00:11:15.322 | 70.00th=[ 363], 80.00th=[ 408], 90.00th=[ 
465], 95.00th=[ 486], 00:11:15.322 | 99.00th=[ 562], 99.50th=[ 644], 99.90th=[ 1156], 99.95th=[ 1156], 00:11:15.322 | 99.99th=[ 1156] 00:11:15.322 bw ( KiB/s): min= 4096, max= 4096, per=51.90%, avg=4096.00, stdev= 0.00, samples=1 00:11:15.322 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:15.322 lat (usec) : 250=31.71%, 500=61.16%, 750=3.00% 00:11:15.322 lat (msec) : 2=0.19%, 50=3.94% 00:11:15.322 cpu : usr=0.48%, sys=0.87%, ctx=535, majf=0, minf=1 00:11:15.322 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.322 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.322 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.322 job3: (groupid=0, jobs=1): err= 0: pid=2882985: Mon Nov 18 18:18:13 2024 00:11:15.322 read: IOPS=21, BW=85.2KiB/s (87.2kB/s)(88.0KiB/1033msec) 00:11:15.322 slat (nsec): min=7076, max=33548, avg=24313.91, stdev=10322.27 00:11:15.322 clat (usec): min=40898, max=41074, avg=40968.40, stdev=33.75 00:11:15.322 lat (usec): min=40931, max=41087, avg=40992.71, stdev=28.64 00:11:15.322 clat percentiles (usec): 00:11:15.322 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:15.322 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:15.322 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:15.322 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:15.322 | 99.99th=[41157] 00:11:15.322 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:11:15.322 slat (nsec): min=5951, max=41685, avg=12652.96, stdev=6462.94 00:11:15.322 clat (usec): min=191, max=491, avg=238.58, stdev=45.69 00:11:15.322 lat (usec): min=198, max=502, avg=251.23, stdev=45.89 00:11:15.322 clat percentiles (usec): 
00:11:15.322 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:11:15.322 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:11:15.322 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 260], 95.00th=[ 285], 00:11:15.322 | 99.00th=[ 474], 99.50th=[ 482], 99.90th=[ 494], 99.95th=[ 494], 00:11:15.322 | 99.99th=[ 494] 00:11:15.322 bw ( KiB/s): min= 4096, max= 4096, per=51.90%, avg=4096.00, stdev= 0.00, samples=1 00:11:15.322 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:15.322 lat (usec) : 250=81.09%, 500=14.79% 00:11:15.322 lat (msec) : 50=4.12% 00:11:15.322 cpu : usr=0.29%, sys=0.68%, ctx=535, majf=0, minf=2 00:11:15.322 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.322 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.322 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.322 00:11:15.322 Run status group 0 (all jobs): 00:11:15.322 READ: bw=328KiB/s (335kB/s), 80.9KiB/s-85.2KiB/s (82.9kB/s-87.2kB/s), io=340KiB (348kB), run=1016-1038msec 00:11:15.322 WRITE: bw=7892KiB/s (8082kB/s), 1973KiB/s-2016KiB/s (2020kB/s-2064kB/s), io=8192KiB (8389kB), run=1016-1038msec 00:11:15.322 00:11:15.322 Disk stats (read/write): 00:11:15.322 nvme0n1: ios=40/512, merge=0/0, ticks=1563/132, in_queue=1695, util=85.87% 00:11:15.322 nvme0n2: ios=72/512, merge=0/0, ticks=742/147, in_queue=889, util=91.57% 00:11:15.322 nvme0n3: ios=79/512, merge=0/0, ticks=765/158, in_queue=923, util=95.32% 00:11:15.322 nvme0n4: ios=74/512, merge=0/0, ticks=784/119, in_queue=903, util=95.80% 00:11:15.322 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:15.322 [global] 
00:11:15.322 thread=1 00:11:15.322 invalidate=1 00:11:15.322 rw=write 00:11:15.322 time_based=1 00:11:15.322 runtime=1 00:11:15.322 ioengine=libaio 00:11:15.322 direct=1 00:11:15.322 bs=4096 00:11:15.322 iodepth=128 00:11:15.322 norandommap=0 00:11:15.322 numjobs=1 00:11:15.322 00:11:15.322 verify_dump=1 00:11:15.322 verify_backlog=512 00:11:15.322 verify_state_save=0 00:11:15.322 do_verify=1 00:11:15.322 verify=crc32c-intel 00:11:15.322 [job0] 00:11:15.322 filename=/dev/nvme0n1 00:11:15.322 [job1] 00:11:15.322 filename=/dev/nvme0n2 00:11:15.322 [job2] 00:11:15.322 filename=/dev/nvme0n3 00:11:15.322 [job3] 00:11:15.322 filename=/dev/nvme0n4 00:11:15.322 Could not set queue depth (nvme0n1) 00:11:15.322 Could not set queue depth (nvme0n2) 00:11:15.322 Could not set queue depth (nvme0n3) 00:11:15.322 Could not set queue depth (nvme0n4) 00:11:15.580 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:15.580 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:15.580 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:15.580 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:15.580 fio-3.35 00:11:15.580 Starting 4 threads 00:11:16.954 00:11:16.954 job0: (groupid=0, jobs=1): err= 0: pid=2883323: Mon Nov 18 18:18:14 2024 00:11:16.954 read: IOPS=4148, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1003msec) 00:11:16.954 slat (usec): min=2, max=12200, avg=118.14, stdev=662.53 00:11:16.954 clat (usec): min=703, max=30645, avg=14723.31, stdev=3487.94 00:11:16.954 lat (usec): min=3703, max=30649, avg=14841.45, stdev=3484.87 00:11:16.954 clat percentiles (usec): 00:11:16.954 | 1.00th=[ 7373], 5.00th=[10683], 10.00th=[11994], 20.00th=[12911], 00:11:16.954 | 30.00th=[13304], 40.00th=[13566], 50.00th=[14091], 60.00th=[14484], 00:11:16.954 | 70.00th=[15008], 
80.00th=[16057], 90.00th=[18744], 95.00th=[23200], 00:11:16.954 | 99.00th=[28181], 99.50th=[28181], 99.90th=[30540], 99.95th=[30540], 00:11:16.954 | 99.99th=[30540] 00:11:16.954 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:11:16.954 slat (usec): min=3, max=9730, avg=103.26, stdev=545.37 00:11:16.954 clat (usec): min=889, max=33029, avg=14317.11, stdev=3118.36 00:11:16.954 lat (usec): min=907, max=33043, avg=14420.37, stdev=3124.68 00:11:16.954 clat percentiles (usec): 00:11:16.954 | 1.00th=[ 8848], 5.00th=[10683], 10.00th=[11469], 20.00th=[12387], 00:11:16.954 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13698], 60.00th=[14222], 00:11:16.954 | 70.00th=[15008], 80.00th=[15664], 90.00th=[16712], 95.00th=[19530], 00:11:16.954 | 99.00th=[27132], 99.50th=[31065], 99.90th=[32900], 99.95th=[32900], 00:11:16.954 | 99.99th=[32900] 00:11:16.954 bw ( KiB/s): min=18003, max=18320, per=31.18%, avg=18161.50, stdev=224.15, samples=2 00:11:16.954 iops : min= 4500, max= 4580, avg=4540.00, stdev=56.57, samples=2 00:11:16.954 lat (usec) : 750=0.01%, 1000=0.02% 00:11:16.954 lat (msec) : 4=0.32%, 10=2.22%, 20=91.41%, 50=6.01% 00:11:16.954 cpu : usr=3.99%, sys=5.69%, ctx=423, majf=0, minf=1 00:11:16.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:16.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.954 issued rwts: total=4161,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.954 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.954 job1: (groupid=0, jobs=1): err= 0: pid=2883324: Mon Nov 18 18:18:14 2024 00:11:16.954 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:11:16.954 slat (usec): min=2, max=20201, avg=114.71, stdev=956.95 00:11:16.954 clat (usec): min=1996, max=59269, avg=15529.33, stdev=7571.01 00:11:16.954 lat (usec): min=2003, max=59285, avg=15644.04, 
stdev=7650.41 00:11:16.954 clat percentiles (usec): 00:11:16.954 | 1.00th=[ 2376], 5.00th=[ 7308], 10.00th=[10290], 20.00th=[11994], 00:11:16.954 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13566], 60.00th=[14222], 00:11:16.954 | 70.00th=[15270], 80.00th=[17433], 90.00th=[23200], 95.00th=[34341], 00:11:16.954 | 99.00th=[46924], 99.50th=[46924], 99.90th=[51119], 99.95th=[53740], 00:11:16.954 | 99.99th=[59507] 00:11:16.954 write: IOPS=4115, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1005msec); 0 zone resets 00:11:16.954 slat (usec): min=3, max=16508, avg=104.02, stdev=797.90 00:11:16.954 clat (usec): min=230, max=48288, avg=15412.14, stdev=7549.51 00:11:16.954 lat (usec): min=897, max=48295, avg=15516.16, stdev=7601.74 00:11:16.954 clat percentiles (usec): 00:11:16.954 | 1.00th=[ 3916], 5.00th=[ 6849], 10.00th=[ 8717], 20.00th=[10683], 00:11:16.954 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13566], 60.00th=[14877], 00:11:16.954 | 70.00th=[15795], 80.00th=[18744], 90.00th=[25822], 95.00th=[28443], 00:11:16.954 | 99.00th=[45876], 99.50th=[46924], 99.90th=[48497], 99.95th=[48497], 00:11:16.955 | 99.99th=[48497] 00:11:16.955 bw ( KiB/s): min=16351, max=16384, per=28.10%, avg=16367.50, stdev=23.33, samples=2 00:11:16.955 iops : min= 4087, max= 4096, avg=4091.50, stdev= 6.36, samples=2 00:11:16.955 lat (usec) : 250=0.01%, 1000=0.06% 00:11:16.955 lat (msec) : 2=0.21%, 4=1.23%, 10=11.56%, 20=73.41%, 50=13.36% 00:11:16.955 lat (msec) : 100=0.16% 00:11:16.955 cpu : usr=3.78%, sys=5.38%, ctx=299, majf=0, minf=1 00:11:16.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:16.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.955 issued rwts: total=4096,4136,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.955 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.955 job2: (groupid=0, jobs=1): err= 0: pid=2883325: Mon Nov 18 
18:18:14 2024 00:11:16.955 read: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec) 00:11:16.955 slat (usec): min=2, max=17693, avg=136.72, stdev=857.31 00:11:16.955 clat (usec): min=5930, max=47350, avg=16872.43, stdev=4255.73 00:11:16.955 lat (usec): min=5934, max=47358, avg=17009.16, stdev=4302.48 00:11:16.955 clat percentiles (usec): 00:11:16.955 | 1.00th=[ 7635], 5.00th=[12518], 10.00th=[13829], 20.00th=[14615], 00:11:16.955 | 30.00th=[15270], 40.00th=[15926], 50.00th=[16188], 60.00th=[16909], 00:11:16.955 | 70.00th=[17433], 80.00th=[18220], 90.00th=[19792], 95.00th=[22152], 00:11:16.955 | 99.00th=[36439], 99.50th=[41681], 99.90th=[47449], 99.95th=[47449], 00:11:16.955 | 99.99th=[47449] 00:11:16.955 write: IOPS=3379, BW=13.2MiB/s (13.8MB/s)(13.4MiB/1015msec); 0 zone resets 00:11:16.955 slat (usec): min=4, max=48286, avg=158.63, stdev=1206.15 00:11:16.955 clat (usec): min=952, max=66233, avg=19164.36, stdev=6141.82 00:11:16.955 lat (usec): min=960, max=102433, avg=19322.99, stdev=6341.64 00:11:16.955 clat percentiles (usec): 00:11:16.955 | 1.00th=[ 6718], 5.00th=[12125], 10.00th=[13566], 20.00th=[15664], 00:11:16.955 | 30.00th=[16319], 40.00th=[17171], 50.00th=[17957], 60.00th=[18482], 00:11:16.955 | 70.00th=[19792], 80.00th=[21627], 90.00th=[28967], 95.00th=[32113], 00:11:16.955 | 99.00th=[39584], 99.50th=[41681], 99.90th=[45351], 99.95th=[47449], 00:11:16.955 | 99.99th=[66323] 00:11:16.955 bw ( KiB/s): min=12288, max=14128, per=22.67%, avg=13208.00, stdev=1301.08, samples=2 00:11:16.955 iops : min= 3072, max= 3532, avg=3302.00, stdev=325.27, samples=2 00:11:16.955 lat (usec) : 1000=0.11% 00:11:16.955 lat (msec) : 10=1.89%, 20=78.68%, 50=19.30%, 100=0.02% 00:11:16.955 cpu : usr=2.96%, sys=5.72%, ctx=374, majf=0, minf=1 00:11:16.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:16.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:11:16.955 issued rwts: total=3072,3430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.955 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.955 job3: (groupid=0, jobs=1): err= 0: pid=2883326: Mon Nov 18 18:18:14 2024 00:11:16.955 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:11:16.955 slat (usec): min=2, max=33458, avg=196.14, stdev=1325.82 00:11:16.955 clat (usec): min=10799, max=81135, avg=25907.12, stdev=13697.34 00:11:16.955 lat (usec): min=10807, max=81138, avg=26103.26, stdev=13759.48 00:11:16.955 clat percentiles (usec): 00:11:16.955 | 1.00th=[13960], 5.00th=[15270], 10.00th=[15664], 20.00th=[17433], 00:11:16.955 | 30.00th=[19268], 40.00th=[21627], 50.00th=[22414], 60.00th=[23987], 00:11:16.955 | 70.00th=[26870], 80.00th=[28443], 90.00th=[34341], 95.00th=[68682], 00:11:16.955 | 99.00th=[81265], 99.50th=[81265], 99.90th=[81265], 99.95th=[81265], 00:11:16.955 | 99.99th=[81265] 00:11:16.955 write: IOPS=2595, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1005msec); 0 zone resets 00:11:16.955 slat (usec): min=4, max=25362, avg=184.25, stdev=1049.45 00:11:16.955 clat (usec): min=2078, max=73243, avg=22049.42, stdev=11577.13 00:11:16.955 lat (usec): min=5928, max=73255, avg=22233.67, stdev=11648.73 00:11:16.955 clat percentiles (usec): 00:11:16.955 | 1.00th=[ 7439], 5.00th=[13173], 10.00th=[14877], 20.00th=[15533], 00:11:16.955 | 30.00th=[16188], 40.00th=[16712], 50.00th=[17957], 60.00th=[18482], 00:11:16.955 | 70.00th=[21627], 80.00th=[26084], 90.00th=[37487], 95.00th=[47973], 00:11:16.955 | 99.00th=[69731], 99.50th=[69731], 99.90th=[72877], 99.95th=[72877], 00:11:16.955 | 99.99th=[72877] 00:11:16.955 bw ( KiB/s): min= 8192, max=12288, per=17.58%, avg=10240.00, stdev=2896.31, samples=2 00:11:16.955 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:11:16.955 lat (msec) : 4=0.02%, 10=1.30%, 20=49.52%, 50=44.10%, 100=5.07% 00:11:16.955 cpu : usr=2.49%, sys=3.78%, ctx=248, majf=0, minf=1 00:11:16.955 IO depths 
: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:16.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.955 issued rwts: total=2560,2608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.955 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.955 00:11:16.955 Run status group 0 (all jobs): 00:11:16.955 READ: bw=53.5MiB/s (56.0MB/s), 9.95MiB/s-16.2MiB/s (10.4MB/s-17.0MB/s), io=54.3MiB (56.9MB), run=1003-1015msec 00:11:16.955 WRITE: bw=56.9MiB/s (59.7MB/s), 10.1MiB/s-17.9MiB/s (10.6MB/s-18.8MB/s), io=57.7MiB (60.5MB), run=1003-1015msec 00:11:16.955 00:11:16.955 Disk stats (read/write): 00:11:16.955 nvme0n1: ios=3633/3895, merge=0/0, ticks=17820/19281, in_queue=37101, util=89.98% 00:11:16.955 nvme0n2: ios=3109/3584, merge=0/0, ticks=41112/42265, in_queue=83377, util=97.16% 00:11:16.955 nvme0n3: ios=2580/2983, merge=0/0, ticks=23957/31285, in_queue=55242, util=95.43% 00:11:16.955 nvme0n4: ios=2212/2560, merge=0/0, ticks=20279/24117, in_queue=44396, util=98.43% 00:11:16.955 18:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:16.955 [global] 00:11:16.955 thread=1 00:11:16.955 invalidate=1 00:11:16.955 rw=randwrite 00:11:16.955 time_based=1 00:11:16.955 runtime=1 00:11:16.955 ioengine=libaio 00:11:16.955 direct=1 00:11:16.955 bs=4096 00:11:16.955 iodepth=128 00:11:16.955 norandommap=0 00:11:16.955 numjobs=1 00:11:16.955 00:11:16.955 verify_dump=1 00:11:16.955 verify_backlog=512 00:11:16.955 verify_state_save=0 00:11:16.955 do_verify=1 00:11:16.955 verify=crc32c-intel 00:11:16.955 [job0] 00:11:16.955 filename=/dev/nvme0n1 00:11:16.955 [job1] 00:11:16.955 filename=/dev/nvme0n2 00:11:16.955 [job2] 00:11:16.955 filename=/dev/nvme0n3 00:11:16.955 [job3] 00:11:16.955 filename=/dev/nvme0n4 
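fio reports each bandwidth figure twice: in binary units with the decimal equivalent in parentheses (e.g. `6138KiB/s (6285kB/s)` and `900KiB/s (922kB/s)` in the read lines above). The mapping is a fixed factor of 1024/1000 with rounding; a small helper illustrating the conversion (written for this note, not something fio ships):

```shell
# Convert fio's binary KiB/s figure to the decimal kB/s it prints alongside.
# Integer arithmetic with +500 implements round-to-nearest on the /1000 step.
kib_to_kb() {
  echo $(( ($1 * 1024 + 500) / 1000 ))
}
```

Checking against pairs taken from the output above: 6138 KiB/s gives 6285 kB/s, 2004 KiB/s gives 2052 kB/s, and 900 KiB/s gives 922 kB/s, matching fio's printed values.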
00:11:16.955 Could not set queue depth (nvme0n1) 00:11:16.955 Could not set queue depth (nvme0n2) 00:11:16.955 Could not set queue depth (nvme0n3) 00:11:16.955 Could not set queue depth (nvme0n4) 00:11:16.955 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.955 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.955 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.955 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.955 fio-3.35 00:11:16.955 Starting 4 threads 00:11:18.331 00:11:18.331 job0: (groupid=0, jobs=1): err= 0: pid=2883552: Mon Nov 18 18:18:16 2024 00:11:18.331 read: IOPS=4031, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1004msec) 00:11:18.331 slat (usec): min=2, max=12299, avg=118.36, stdev=732.33 00:11:18.331 clat (usec): min=1097, max=59477, avg=15398.07, stdev=4791.38 00:11:18.331 lat (usec): min=4342, max=59484, avg=15516.43, stdev=4823.77 00:11:18.331 clat percentiles (usec): 00:11:18.331 | 1.00th=[ 5342], 5.00th=[11600], 10.00th=[12256], 20.00th=[12649], 00:11:18.331 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14877], 00:11:18.331 | 70.00th=[16188], 80.00th=[17695], 90.00th=[20317], 95.00th=[21365], 00:11:18.331 | 99.00th=[25297], 99.50th=[55837], 99.90th=[59507], 99.95th=[59507], 00:11:18.331 | 99.99th=[59507] 00:11:18.331 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:11:18.331 slat (usec): min=3, max=17314, avg=117.64, stdev=653.40 00:11:18.331 clat (usec): min=1700, max=50257, avg=15856.02, stdev=6766.12 00:11:18.331 lat (usec): min=1717, max=50266, avg=15973.66, stdev=6820.55 00:11:18.331 clat percentiles (usec): 00:11:18.331 | 1.00th=[ 5735], 5.00th=[ 8848], 10.00th=[11338], 20.00th=[12518], 00:11:18.331 | 30.00th=[13304], 
40.00th=[13960], 50.00th=[14484], 60.00th=[14746], 00:11:18.331 | 70.00th=[15139], 80.00th=[17171], 90.00th=[23462], 95.00th=[30802], 00:11:18.331 | 99.00th=[47973], 99.50th=[48497], 99.90th=[50070], 99.95th=[50070], 00:11:18.331 | 99.99th=[50070] 00:11:18.331 bw ( KiB/s): min=15816, max=16918, per=28.46%, avg=16367.00, stdev=779.23, samples=2 00:11:18.331 iops : min= 3954, max= 4229, avg=4091.50, stdev=194.45, samples=2 00:11:18.331 lat (msec) : 2=0.04%, 4=0.12%, 10=4.96%, 20=83.66%, 50=10.71% 00:11:18.331 lat (msec) : 100=0.52% 00:11:18.331 cpu : usr=5.38%, sys=7.78%, ctx=451, majf=0, minf=1 00:11:18.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:18.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.331 issued rwts: total=4048,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.331 job1: (groupid=0, jobs=1): err= 0: pid=2883553: Mon Nov 18 18:18:16 2024 00:11:18.331 read: IOPS=4024, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1004msec) 00:11:18.331 slat (nsec): min=1923, max=10819k, avg=123877.41, stdev=691523.86 00:11:18.331 clat (usec): min=2076, max=30489, avg=15775.39, stdev=4807.27 00:11:18.331 lat (usec): min=2084, max=30753, avg=15899.27, stdev=4847.52 00:11:18.331 clat percentiles (usec): 00:11:18.331 | 1.00th=[ 5538], 5.00th=[10028], 10.00th=[12125], 20.00th=[12780], 00:11:18.331 | 30.00th=[13566], 40.00th=[14222], 50.00th=[14484], 60.00th=[14877], 00:11:18.331 | 70.00th=[15795], 80.00th=[17695], 90.00th=[23725], 95.00th=[27395], 00:11:18.331 | 99.00th=[27657], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540], 00:11:18.331 | 99.99th=[30540] 00:11:18.331 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:11:18.331 slat (usec): min=2, max=10607, avg=113.54, stdev=656.82 00:11:18.331 clat (usec): min=346, max=27625, 
avg=15429.33, stdev=4466.94 00:11:18.331 lat (usec): min=379, max=27664, avg=15542.87, stdev=4487.62 00:11:18.331 clat percentiles (usec): 00:11:18.331 | 1.00th=[ 3654], 5.00th=[10552], 10.00th=[11469], 20.00th=[12518], 00:11:18.331 | 30.00th=[13304], 40.00th=[14091], 50.00th=[14615], 60.00th=[15139], 00:11:18.331 | 70.00th=[15664], 80.00th=[16909], 90.00th=[23462], 95.00th=[26870], 00:11:18.331 | 99.00th=[27132], 99.50th=[27132], 99.90th=[27132], 99.95th=[27132], 00:11:18.331 | 99.99th=[27657] 00:11:18.331 bw ( KiB/s): min=13912, max=18856, per=28.49%, avg=16384.00, stdev=3495.94, samples=2 00:11:18.331 iops : min= 3478, max= 4714, avg=4096.00, stdev=873.98, samples=2 00:11:18.331 lat (usec) : 500=0.01% 00:11:18.331 lat (msec) : 2=0.33%, 4=0.23%, 10=4.14%, 20=81.47%, 50=13.81% 00:11:18.331 cpu : usr=3.39%, sys=4.49%, ctx=422, majf=0, minf=1 00:11:18.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:18.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.331 issued rwts: total=4041,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.331 job2: (groupid=0, jobs=1): err= 0: pid=2883554: Mon Nov 18 18:18:16 2024 00:11:18.331 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:11:18.331 slat (usec): min=2, max=13499, avg=171.22, stdev=960.03 00:11:18.331 clat (usec): min=9427, max=81698, avg=21976.87, stdev=8736.32 00:11:18.331 lat (usec): min=9439, max=88006, avg=22148.09, stdev=8767.72 00:11:18.331 clat percentiles (usec): 00:11:18.331 | 1.00th=[11469], 5.00th=[13960], 10.00th=[15664], 20.00th=[17433], 00:11:18.331 | 30.00th=[17695], 40.00th=[18220], 50.00th=[18744], 60.00th=[20317], 00:11:18.331 | 70.00th=[22938], 80.00th=[27395], 90.00th=[27657], 95.00th=[35390], 00:11:18.331 | 99.00th=[64750], 99.50th=[79168], 99.90th=[81265], 
99.95th=[81265], 00:11:18.331 | 99.99th=[81265] 00:11:18.331 write: IOPS=2604, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1003msec); 0 zone resets 00:11:18.331 slat (usec): min=3, max=35395, avg=206.07, stdev=1396.48 00:11:18.331 clat (usec): min=2800, max=95103, avg=26790.99, stdev=15541.13 00:11:18.331 lat (usec): min=2820, max=95146, avg=26997.06, stdev=15649.12 00:11:18.331 clat percentiles (usec): 00:11:18.331 | 1.00th=[ 8455], 5.00th=[12780], 10.00th=[15926], 20.00th=[17695], 00:11:18.331 | 30.00th=[18482], 40.00th=[19006], 50.00th=[20055], 60.00th=[23725], 00:11:18.331 | 70.00th=[27132], 80.00th=[32900], 90.00th=[46400], 95.00th=[64750], 00:11:18.331 | 99.00th=[79168], 99.50th=[79168], 99.90th=[79168], 99.95th=[84411], 00:11:18.331 | 99.99th=[94897] 00:11:18.331 bw ( KiB/s): min= 9808, max=10672, per=17.80%, avg=10240.00, stdev=610.94, samples=2 00:11:18.331 iops : min= 2452, max= 2668, avg=2560.00, stdev=152.74, samples=2 00:11:18.331 lat (msec) : 4=0.39%, 10=1.72%, 20=51.74%, 50=40.43%, 100=5.72% 00:11:18.331 cpu : usr=2.30%, sys=5.29%, ctx=241, majf=0, minf=1 00:11:18.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:18.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.331 issued rwts: total=2560,2612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.331 job3: (groupid=0, jobs=1): err= 0: pid=2883555: Mon Nov 18 18:18:16 2024 00:11:18.331 read: IOPS=3534, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1014msec) 00:11:18.331 slat (usec): min=3, max=15425, avg=148.87, stdev=1018.72 00:11:18.331 clat (usec): min=5896, max=40022, avg=18716.85, stdev=5489.59 00:11:18.331 lat (usec): min=5904, max=40034, avg=18865.72, stdev=5538.61 00:11:18.331 clat percentiles (usec): 00:11:18.331 | 1.00th=[ 6849], 5.00th=[12387], 10.00th=[13435], 20.00th=[14746], 00:11:18.331 | 
30.00th=[16057], 40.00th=[16319], 50.00th=[16909], 60.00th=[17957], 00:11:18.331 | 70.00th=[20055], 80.00th=[23200], 90.00th=[26346], 95.00th=[28443], 00:11:18.331 | 99.00th=[34866], 99.50th=[36963], 99.90th=[40109], 99.95th=[40109], 00:11:18.331 | 99.99th=[40109] 00:11:18.331 write: IOPS=3723, BW=14.5MiB/s (15.3MB/s)(14.8MiB/1014msec); 0 zone resets 00:11:18.331 slat (usec): min=4, max=17476, avg=112.03, stdev=777.64 00:11:18.331 clat (usec): min=1381, max=44286, avg=16216.51, stdev=5350.66 00:11:18.331 lat (usec): min=3528, max=44295, avg=16328.54, stdev=5401.32 00:11:18.331 clat percentiles (usec): 00:11:18.331 | 1.00th=[ 5276], 5.00th=[ 6849], 10.00th=[ 8586], 20.00th=[12518], 00:11:18.331 | 30.00th=[15270], 40.00th=[15926], 50.00th=[16319], 60.00th=[16909], 00:11:18.331 | 70.00th=[17433], 80.00th=[19792], 90.00th=[21103], 95.00th=[23725], 00:11:18.331 | 99.00th=[37487], 99.50th=[40633], 99.90th=[44303], 99.95th=[44303], 00:11:18.331 | 99.99th=[44303] 00:11:18.331 bw ( KiB/s): min=14168, max=15024, per=25.38%, avg=14596.00, stdev=605.28, samples=2 00:11:18.331 iops : min= 3542, max= 3756, avg=3649.00, stdev=151.32, samples=2 00:11:18.331 lat (msec) : 2=0.01%, 4=0.08%, 10=7.77%, 20=67.89%, 50=24.24% 00:11:18.331 cpu : usr=5.23%, sys=7.21%, ctx=395, majf=0, minf=1 00:11:18.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:18.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.331 issued rwts: total=3584,3776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.331 00:11:18.331 Run status group 0 (all jobs): 00:11:18.331 READ: bw=54.8MiB/s (57.5MB/s), 9.97MiB/s-15.7MiB/s (10.5MB/s-16.5MB/s), io=55.6MiB (58.3MB), run=1003-1014msec 00:11:18.331 WRITE: bw=56.2MiB/s (58.9MB/s), 10.2MiB/s-15.9MiB/s (10.7MB/s-16.7MB/s), io=57.0MiB (59.7MB), run=1003-1014msec 
00:11:18.331 00:11:18.331 Disk stats (read/write): 00:11:18.331 nvme0n1: ios=3247/3584, merge=0/0, ticks=35386/38390, in_queue=73776, util=85.97% 00:11:18.331 nvme0n2: ios=3599/3816, merge=0/0, ticks=26098/24531, in_queue=50629, util=88.83% 00:11:18.331 nvme0n3: ios=2048/2432, merge=0/0, ticks=14197/22194, in_queue=36391, util=87.70% 00:11:18.331 nvme0n4: ios=2894/3072, merge=0/0, ticks=47906/44159, in_queue=92065, util=96.01% 00:11:18.331 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:18.331 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2883696 00:11:18.331 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:18.331 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:18.331 [global] 00:11:18.331 thread=1 00:11:18.331 invalidate=1 00:11:18.331 rw=read 00:11:18.331 time_based=1 00:11:18.331 runtime=10 00:11:18.331 ioengine=libaio 00:11:18.331 direct=1 00:11:18.331 bs=4096 00:11:18.331 iodepth=1 00:11:18.331 norandommap=1 00:11:18.331 numjobs=1 00:11:18.331 00:11:18.331 [job0] 00:11:18.331 filename=/dev/nvme0n1 00:11:18.331 [job1] 00:11:18.331 filename=/dev/nvme0n2 00:11:18.331 [job2] 00:11:18.331 filename=/dev/nvme0n3 00:11:18.331 [job3] 00:11:18.331 filename=/dev/nvme0n4 00:11:18.331 Could not set queue depth (nvme0n1) 00:11:18.331 Could not set queue depth (nvme0n2) 00:11:18.331 Could not set queue depth (nvme0n3) 00:11:18.331 Could not set queue depth (nvme0n4) 00:11:18.331 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.331 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.331 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.331 
job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.331 fio-3.35 00:11:18.331 Starting 4 threads 00:11:21.613 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:21.613 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:21.613 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=22876160, buflen=4096 00:11:21.613 fio: pid=2883793, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:21.871 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:21.871 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:21.871 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=29794304, buflen=4096 00:11:21.871 fio: pid=2883792, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:22.129 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=31408128, buflen=4096 00:11:22.129 fio: pid=2883789, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:22.129 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:22.129 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:22.696 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=10391552, buflen=4096 00:11:22.696 fio: pid=2883790, err=5/file:io_u.c:1889, 
func=io_u error, error=Input/output error 00:11:22.696 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:22.696 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:22.696 00:11:22.696 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2883789: Mon Nov 18 18:18:20 2024 00:11:22.696 read: IOPS=2143, BW=8572KiB/s (8778kB/s)(30.0MiB/3578msec) 00:11:22.696 slat (usec): min=4, max=11713, avg=20.76, stdev=166.46 00:11:22.696 clat (usec): min=219, max=40993, avg=438.59, stdev=1229.30 00:11:22.696 lat (usec): min=225, max=41071, avg=459.35, stdev=1241.06 00:11:22.696 clat percentiles (usec): 00:11:22.696 | 1.00th=[ 235], 5.00th=[ 245], 10.00th=[ 255], 20.00th=[ 285], 00:11:22.696 | 30.00th=[ 338], 40.00th=[ 379], 50.00th=[ 408], 60.00th=[ 433], 00:11:22.696 | 70.00th=[ 461], 80.00th=[ 494], 90.00th=[ 537], 95.00th=[ 570], 00:11:22.696 | 99.00th=[ 627], 99.50th=[ 701], 99.90th=[ 3326], 99.95th=[41157], 00:11:22.696 | 99.99th=[41157] 00:11:22.696 bw ( KiB/s): min= 7872, max=11592, per=39.18%, avg=9121.33, stdev=1364.90, samples=6 00:11:22.696 iops : min= 1968, max= 2898, avg=2280.33, stdev=341.22, samples=6 00:11:22.696 lat (usec) : 250=7.69%, 500=73.70%, 750=18.20%, 1000=0.29% 00:11:22.696 lat (msec) : 4=0.01%, 50=0.09% 00:11:22.696 cpu : usr=1.43%, sys=4.58%, ctx=7674, majf=0, minf=1 00:11:22.696 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.696 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.696 issued rwts: total=7669,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.696 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.696 job1: 
(groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2883790: Mon Nov 18 18:18:20 2024 00:11:22.696 read: IOPS=640, BW=2561KiB/s (2622kB/s)(9.91MiB/3963msec) 00:11:22.696 slat (usec): min=4, max=18934, avg=31.63, stdev=442.11 00:11:22.696 clat (usec): min=210, max=96440, avg=1525.84, stdev=6808.31 00:11:22.696 lat (usec): min=215, max=96455, avg=1554.44, stdev=6884.84 00:11:22.696 clat percentiles (usec): 00:11:22.696 | 1.00th=[ 231], 5.00th=[ 245], 10.00th=[ 255], 20.00th=[ 330], 00:11:22.696 | 30.00th=[ 404], 40.00th=[ 437], 50.00th=[ 457], 60.00th=[ 478], 00:11:22.696 | 70.00th=[ 498], 80.00th=[ 523], 90.00th=[ 553], 95.00th=[ 611], 00:11:22.696 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[79168], 00:11:22.696 | 99.99th=[95945] 00:11:22.696 bw ( KiB/s): min= 96, max= 7744, per=11.98%, avg=2789.71, stdev=3059.75, samples=7 00:11:22.696 iops : min= 24, max= 1936, avg=697.43, stdev=764.94, samples=7 00:11:22.696 lat (usec) : 250=7.57%, 500=63.75%, 750=25.69%, 1000=0.28% 00:11:22.696 lat (msec) : 2=0.04%, 4=0.04%, 50=2.52%, 100=0.08% 00:11:22.696 cpu : usr=0.48%, sys=1.49%, ctx=2542, majf=0, minf=2 00:11:22.696 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.696 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.696 issued rwts: total=2538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.696 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.696 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2883792: Mon Nov 18 18:18:20 2024 00:11:22.696 read: IOPS=2205, BW=8820KiB/s (9031kB/s)(28.4MiB/3299msec) 00:11:22.696 slat (nsec): min=4432, max=70903, avg=19008.50, stdev=10220.67 00:11:22.696 clat (usec): min=223, max=41974, avg=426.95, stdev=1271.11 00:11:22.696 lat (usec): min=229, max=41992, 
avg=445.96, stdev=1271.31 00:11:22.696 clat percentiles (usec): 00:11:22.696 | 1.00th=[ 241], 5.00th=[ 251], 10.00th=[ 260], 20.00th=[ 273], 00:11:22.696 | 30.00th=[ 285], 40.00th=[ 314], 50.00th=[ 359], 60.00th=[ 416], 00:11:22.696 | 70.00th=[ 465], 80.00th=[ 506], 90.00th=[ 553], 95.00th=[ 586], 00:11:22.696 | 99.00th=[ 734], 99.50th=[ 840], 99.90th=[ 1188], 99.95th=[41157], 00:11:22.696 | 99.99th=[42206] 00:11:22.696 bw ( KiB/s): min= 7760, max=10304, per=40.89%, avg=9520.00, stdev=901.11, samples=6 00:11:22.696 iops : min= 1940, max= 2576, avg=2380.00, stdev=225.28, samples=6 00:11:22.696 lat (usec) : 250=4.43%, 500=74.63%, 750=19.99%, 1000=0.82% 00:11:22.696 lat (msec) : 2=0.03%, 50=0.10% 00:11:22.696 cpu : usr=1.67%, sys=4.82%, ctx=7276, majf=0, minf=2 00:11:22.696 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.696 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.696 issued rwts: total=7275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.696 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.696 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2883793: Mon Nov 18 18:18:20 2024 00:11:22.696 read: IOPS=1896, BW=7586KiB/s (7768kB/s)(21.8MiB/2945msec) 00:11:22.696 slat (nsec): min=4574, max=77221, avg=19592.35, stdev=10714.17 00:11:22.696 clat (usec): min=241, max=41318, avg=498.86, stdev=1537.72 00:11:22.696 lat (usec): min=246, max=41336, avg=518.45, stdev=1538.01 00:11:22.696 clat percentiles (usec): 00:11:22.697 | 1.00th=[ 262], 5.00th=[ 281], 10.00th=[ 306], 20.00th=[ 351], 00:11:22.697 | 30.00th=[ 388], 40.00th=[ 416], 50.00th=[ 441], 60.00th=[ 465], 00:11:22.697 | 70.00th=[ 486], 80.00th=[ 519], 90.00th=[ 570], 95.00th=[ 611], 00:11:22.697 | 99.00th=[ 783], 99.50th=[ 848], 99.90th=[40633], 99.95th=[41157], 00:11:22.697 | 
99.99th=[41157] 00:11:22.697 bw ( KiB/s): min= 3608, max= 9464, per=32.84%, avg=7644.80, stdev=2357.03, samples=5 00:11:22.697 iops : min= 902, max= 2366, avg=1911.20, stdev=589.26, samples=5 00:11:22.697 lat (usec) : 250=0.16%, 500=74.35%, 750=24.33%, 1000=1.00% 00:11:22.697 lat (msec) : 50=0.14% 00:11:22.697 cpu : usr=1.43%, sys=4.35%, ctx=5586, majf=0, minf=2 00:11:22.697 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.697 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.697 issued rwts: total=5586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.697 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.697 00:11:22.697 Run status group 0 (all jobs): 00:11:22.697 READ: bw=22.7MiB/s (23.8MB/s), 2561KiB/s-8820KiB/s (2622kB/s-9031kB/s), io=90.1MiB (94.5MB), run=2945-3963msec 00:11:22.697 00:11:22.697 Disk stats (read/write): 00:11:22.697 nvme0n1: ios=7704/0, merge=0/0, ticks=3238/0, in_queue=3238, util=99.51% 00:11:22.697 nvme0n2: ios=2575/0, merge=0/0, ticks=4129/0, in_queue=4129, util=99.37% 00:11:22.697 nvme0n3: ios=7313/0, merge=0/0, ticks=3122/0, in_queue=3122, util=100.00% 00:11:22.697 nvme0n4: ios=5461/0, merge=0/0, ticks=2617/0, in_queue=2617, util=96.72% 00:11:22.955 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:22.955 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:23.213 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:23.213 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:23.472 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:23.472 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:24.038 18:18:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:24.038 18:18:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:24.296 18:18:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:24.296 18:18:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2883696 00:11:24.296 18:18:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:24.296 18:18:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.229 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:25.229 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:25.229 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:25.229 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.229 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:25.229 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.229 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:25.229 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:25.229 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:25.229 nvmf hotplug test: fio failed as expected 00:11:25.229 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:25.229 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:25.229 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:25.229 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:25.229 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:25.229 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:25.229 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:25.229 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:25.487 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:25.487 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:25.487 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:25.487 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:25.487 rmmod nvme_tcp 00:11:25.487 rmmod nvme_fabrics 00:11:25.487 rmmod nvme_keyring 
00:11:25.487 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:25.487 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:25.487 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:25.487 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2881011 ']' 00:11:25.487 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2881011 00:11:25.487 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2881011 ']' 00:11:25.487 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2881011 00:11:25.487 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:25.487 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.487 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2881011 00:11:25.487 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:25.487 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:25.487 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2881011' 00:11:25.487 killing process with pid 2881011 00:11:25.487 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2881011 00:11:25.487 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2881011 00:11:26.858 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:26.858 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:26.858 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:26.858 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:26.858 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:26.858 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:26.858 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:26.858 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:26.858 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:26.858 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.858 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.858 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.834 18:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:28.834 00:11:28.834 real 0m27.378s 00:11:28.834 user 1m35.475s 00:11:28.834 sys 0m7.533s 00:11:28.834 18:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.834 18:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.834 ************************************ 00:11:28.834 END TEST nvmf_fio_target 00:11:28.834 ************************************ 00:11:28.834 18:18:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:28.834 
18:18:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:28.834 18:18:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.834 18:18:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:28.834 ************************************ 00:11:28.834 START TEST nvmf_bdevio 00:11:28.834 ************************************ 00:11:28.834 18:18:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:28.834 * Looking for test storage... 00:11:28.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.834 18:18:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:28.834 18:18:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:28.834 18:18:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.834 18:18:27 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.834 18:18:27 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:28.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.834 --rc genhtml_branch_coverage=1 00:11:28.834 --rc genhtml_function_coverage=1 00:11:28.834 --rc genhtml_legend=1 00:11:28.834 --rc geninfo_all_blocks=1 00:11:28.834 --rc geninfo_unexecuted_blocks=1 00:11:28.834 00:11:28.834 ' 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:28.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.834 --rc genhtml_branch_coverage=1 00:11:28.834 --rc genhtml_function_coverage=1 00:11:28.834 --rc genhtml_legend=1 00:11:28.834 --rc geninfo_all_blocks=1 00:11:28.834 --rc geninfo_unexecuted_blocks=1 00:11:28.834 00:11:28.834 ' 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:28.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.834 --rc genhtml_branch_coverage=1 00:11:28.834 --rc genhtml_function_coverage=1 00:11:28.834 --rc genhtml_legend=1 00:11:28.834 --rc geninfo_all_blocks=1 00:11:28.834 --rc geninfo_unexecuted_blocks=1 00:11:28.834 00:11:28.834 ' 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:28.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.834 --rc genhtml_branch_coverage=1 00:11:28.834 --rc genhtml_function_coverage=1 00:11:28.834 --rc genhtml_legend=1 00:11:28.834 --rc geninfo_all_blocks=1 00:11:28.834 --rc 
geninfo_unexecuted_blocks=1 00:11:28.834 00:11:28.834 ' 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.834 18:18:27 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.834 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:28.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:28.835 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:31.368 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.368 18:18:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:31.368 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.368 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:31.369 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:31.369 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:31.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:11:31.369 00:11:31.369 --- 10.0.0.2 ping statistics --- 00:11:31.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.369 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:31.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:11:31.369 00:11:31.369 --- 10.0.0.1 ping statistics --- 00:11:31.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.369 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2886698 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt 
-i 0 -e 0xFFFF -m 0x78 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2886698 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2886698 ']' 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:31.369 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.369 [2024-11-18 18:18:29.579461] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:11:31.369 [2024-11-18 18:18:29.579637] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.628 [2024-11-18 18:18:29.731271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:31.628 [2024-11-18 18:18:29.877760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.628 [2024-11-18 18:18:29.877848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:31.628 [2024-11-18 18:18:29.877874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.628 [2024-11-18 18:18:29.877898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.628 [2024-11-18 18:18:29.877919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:31.628 [2024-11-18 18:18:29.880846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:31.628 [2024-11-18 18:18:29.880902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:31.628 [2024-11-18 18:18:29.880952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.628 [2024-11-18 18:18:29.880959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:32.561 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:32.561 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:32.561 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:32.561 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:32.561 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.561 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.561 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.562 [2024-11-18 18:18:30.579227] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.562 Malloc0 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.562 [2024-11-18 
18:18:30.700642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:32.562 { 00:11:32.562 "params": { 00:11:32.562 "name": "Nvme$subsystem", 00:11:32.562 "trtype": "$TEST_TRANSPORT", 00:11:32.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:32.562 "adrfam": "ipv4", 00:11:32.562 "trsvcid": "$NVMF_PORT", 00:11:32.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:32.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:32.562 "hdgst": ${hdgst:-false}, 00:11:32.562 "ddgst": ${ddgst:-false} 00:11:32.562 }, 00:11:32.562 "method": "bdev_nvme_attach_controller" 00:11:32.562 } 00:11:32.562 EOF 00:11:32.562 )") 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:32.562 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:32.562 "params": { 00:11:32.562 "name": "Nvme1", 00:11:32.562 "trtype": "tcp", 00:11:32.562 "traddr": "10.0.0.2", 00:11:32.562 "adrfam": "ipv4", 00:11:32.562 "trsvcid": "4420", 00:11:32.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:32.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:32.562 "hdgst": false, 00:11:32.562 "ddgst": false 00:11:32.562 }, 00:11:32.562 "method": "bdev_nvme_attach_controller" 00:11:32.562 }' 00:11:32.562 [2024-11-18 18:18:30.783359] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:11:32.562 [2024-11-18 18:18:30.783501] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886861 ] 00:11:32.820 [2024-11-18 18:18:30.922223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:32.820 [2024-11-18 18:18:31.057431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.820 [2024-11-18 18:18:31.057481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.820 [2024-11-18 18:18:31.057486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.385 I/O targets: 00:11:33.385 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:33.385 00:11:33.385 00:11:33.385 CUnit - A unit testing framework for C - Version 2.1-3 00:11:33.385 http://cunit.sourceforge.net/ 00:11:33.385 00:11:33.385 00:11:33.385 Suite: bdevio tests on: Nvme1n1 00:11:33.385 Test: blockdev write read block ...passed 00:11:33.644 Test: blockdev write zeroes read block ...passed 00:11:33.644 Test: blockdev write zeroes read no split ...passed 00:11:33.644 Test: blockdev write zeroes read split 
...passed 00:11:33.644 Test: blockdev write zeroes read split partial ...passed 00:11:33.644 Test: blockdev reset ...[2024-11-18 18:18:31.809790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:33.644 [2024-11-18 18:18:31.810000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:11:33.644 [2024-11-18 18:18:31.864992] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:11:33.644 passed 00:11:33.644 Test: blockdev write read 8 blocks ...passed 00:11:33.644 Test: blockdev write read size > 128k ...passed 00:11:33.644 Test: blockdev write read invalid size ...passed 00:11:33.644 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:33.644 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:33.644 Test: blockdev write read max offset ...passed 00:11:33.902 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:33.902 Test: blockdev writev readv 8 blocks ...passed 00:11:33.902 Test: blockdev writev readv 30 x 1block ...passed 00:11:33.902 Test: blockdev writev readv block ...passed 00:11:33.902 Test: blockdev writev readv size > 128k ...passed 00:11:33.902 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:33.902 Test: blockdev comparev and writev ...[2024-11-18 18:18:32.121721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.902 [2024-11-18 18:18:32.121798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:33.902 [2024-11-18 18:18:32.121844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.902 [2024-11-18 
18:18:32.121873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:33.902 [2024-11-18 18:18:32.122365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.902 [2024-11-18 18:18:32.122400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:33.902 [2024-11-18 18:18:32.122443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.902 [2024-11-18 18:18:32.122471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:33.902 [2024-11-18 18:18:32.122967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.902 [2024-11-18 18:18:32.123001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:33.902 [2024-11-18 18:18:32.123035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.902 [2024-11-18 18:18:32.123061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:33.902 [2024-11-18 18:18:32.123526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.902 [2024-11-18 18:18:32.123569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:33.902 [2024-11-18 18:18:32.123603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.902 [2024-11-18 18:18:32.123637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:33.902 passed 00:11:33.902 Test: blockdev nvme passthru rw ...passed 00:11:33.902 Test: blockdev nvme passthru vendor specific ...[2024-11-18 18:18:32.207015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:33.902 [2024-11-18 18:18:32.207071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:33.902 [2024-11-18 18:18:32.207341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:33.902 [2024-11-18 18:18:32.207375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:33.902 [2024-11-18 18:18:32.207575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:33.902 [2024-11-18 18:18:32.207615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:33.902 [2024-11-18 18:18:32.207816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:33.902 [2024-11-18 18:18:32.207848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:33.902 passed 00:11:33.902 Test: blockdev nvme admin passthru ...passed 00:11:34.161 Test: blockdev copy ...passed 00:11:34.161 00:11:34.161 Run Summary: Type Total Ran Passed Failed Inactive 00:11:34.161 suites 1 1 n/a 0 0 00:11:34.161 tests 23 23 23 0 0 00:11:34.161 asserts 152 152 152 0 n/a 00:11:34.161 00:11:34.161 Elapsed time = 1.273 seconds 
00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:35.097 rmmod nvme_tcp 00:11:35.097 rmmod nvme_fabrics 00:11:35.097 rmmod nvme_keyring 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2886698 ']' 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2886698 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2886698 ']' 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2886698 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2886698 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2886698' 00:11:35.097 killing process with pid 2886698 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2886698 00:11:35.097 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2886698 00:11:36.471 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:36.471 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:36.471 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:36.471 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:36.471 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:36.471 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:36.471 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:36.471 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:11:36.471 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:36.471 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.472 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.472 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.374 18:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:38.374 00:11:38.374 real 0m9.595s 00:11:38.374 user 0m22.848s 00:11:38.374 sys 0m2.589s 00:11:38.374 18:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.374 18:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.374 ************************************ 00:11:38.374 END TEST nvmf_bdevio 00:11:38.374 ************************************ 00:11:38.374 18:18:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:38.374 00:11:38.374 real 4m30.732s 00:11:38.374 user 11m52.247s 00:11:38.374 sys 1m9.534s 00:11:38.374 18:18:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.374 18:18:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:38.374 ************************************ 00:11:38.374 END TEST nvmf_target_core 00:11:38.374 ************************************ 00:11:38.374 18:18:36 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:38.374 18:18:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:38.374 18:18:36 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.374 18:18:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:11:38.374 ************************************ 00:11:38.374 START TEST nvmf_target_extra 00:11:38.374 ************************************ 00:11:38.374 18:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:38.374 * Looking for test storage... 00:11:38.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:38.374 18:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:38.374 18:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:38.374 18:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:38.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.634 --rc genhtml_branch_coverage=1 00:11:38.634 --rc genhtml_function_coverage=1 00:11:38.634 --rc genhtml_legend=1 00:11:38.634 --rc geninfo_all_blocks=1 
00:11:38.634 --rc geninfo_unexecuted_blocks=1 00:11:38.634 00:11:38.634 ' 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:38.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.634 --rc genhtml_branch_coverage=1 00:11:38.634 --rc genhtml_function_coverage=1 00:11:38.634 --rc genhtml_legend=1 00:11:38.634 --rc geninfo_all_blocks=1 00:11:38.634 --rc geninfo_unexecuted_blocks=1 00:11:38.634 00:11:38.634 ' 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:38.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.634 --rc genhtml_branch_coverage=1 00:11:38.634 --rc genhtml_function_coverage=1 00:11:38.634 --rc genhtml_legend=1 00:11:38.634 --rc geninfo_all_blocks=1 00:11:38.634 --rc geninfo_unexecuted_blocks=1 00:11:38.634 00:11:38.634 ' 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:38.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.634 --rc genhtml_branch_coverage=1 00:11:38.634 --rc genhtml_function_coverage=1 00:11:38.634 --rc genhtml_legend=1 00:11:38.634 --rc geninfo_all_blocks=1 00:11:38.634 --rc geninfo_unexecuted_blocks=1 00:11:38.634 00:11:38.634 ' 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:38.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:38.634 ************************************ 00:11:38.634 START TEST nvmf_example 00:11:38.634 ************************************ 00:11:38.634 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:38.634 * Looking for test storage... 00:11:38.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.635 
18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:38.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.635 --rc genhtml_branch_coverage=1 00:11:38.635 --rc genhtml_function_coverage=1 00:11:38.635 --rc genhtml_legend=1 00:11:38.635 --rc geninfo_all_blocks=1 00:11:38.635 --rc geninfo_unexecuted_blocks=1 00:11:38.635 00:11:38.635 ' 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:38.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.635 --rc genhtml_branch_coverage=1 00:11:38.635 --rc genhtml_function_coverage=1 00:11:38.635 --rc genhtml_legend=1 00:11:38.635 --rc geninfo_all_blocks=1 00:11:38.635 --rc geninfo_unexecuted_blocks=1 00:11:38.635 00:11:38.635 ' 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:38.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.635 --rc genhtml_branch_coverage=1 00:11:38.635 --rc genhtml_function_coverage=1 00:11:38.635 --rc genhtml_legend=1 00:11:38.635 --rc geninfo_all_blocks=1 00:11:38.635 --rc geninfo_unexecuted_blocks=1 00:11:38.635 00:11:38.635 ' 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:38.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.635 --rc 
genhtml_branch_coverage=1 00:11:38.635 --rc genhtml_function_coverage=1 00:11:38.635 --rc genhtml_legend=1 00:11:38.635 --rc geninfo_all_blocks=1 00:11:38.635 --rc geninfo_unexecuted_blocks=1 00:11:38.635 00:11:38.635 ' 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:38.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:38.635 18:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:38.635 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:38.636 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:38.636 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:38.636 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:38.636 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:38.636 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:38.636 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:38.636 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.636 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:38.636 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:38.636 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.636 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:38.636 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:38.636 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:38.636 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.636 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.636 
18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.636 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:38.636 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:38.636 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:38.636 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.166 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.166 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:41.166 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:41.166 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:41.166 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:41.166 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:41.166 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:41.166 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:41.166 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:41.166 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:41.166 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:41.166 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:41.166 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:41.166 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:41.166 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:41.167 18:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:41.167 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:41.167 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:41.167 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.167 18:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:41.167 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.167 
18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.167 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:41.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:41.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:11:41.167 00:11:41.167 --- 10.0.0.2 ping statistics --- 00:11:41.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.167 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:41.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:11:41.167 00:11:41.167 --- 10.0.0.1 ping statistics --- 00:11:41.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.167 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:41.167 18:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:41.167 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2889297 00:11:41.168 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:41.168 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:41.168 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2889297 00:11:41.168 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2889297 ']' 00:11:41.168 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.168 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.168 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:11:41.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.168 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.168 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.102 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.102 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:42.102 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:42.102 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.102 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.102 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.102 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.102 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.102 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.102 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:42.103 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.103 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.103 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.103 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:42.103 
18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:42.103 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.103 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.103 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.103 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:42.103 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:42.103 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.103 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.103 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.103 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.103 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.103 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.103 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.103 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:42.103 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:54.299 Initializing NVMe Controllers 00:11:54.299 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:54.299 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:54.299 Initialization complete. Launching workers. 00:11:54.299 ======================================================== 00:11:54.299 Latency(us) 00:11:54.299 Device Information : IOPS MiB/s Average min max 00:11:54.299 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11700.48 45.71 5471.11 1235.31 15736.60 00:11:54.299 ======================================================== 00:11:54.299 Total : 11700.48 45.71 5471.11 1235.31 15736.60 00:11:54.299 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:54.299 rmmod nvme_tcp 00:11:54.299 rmmod nvme_fabrics 00:11:54.299 rmmod nvme_keyring 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2889297 ']' 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2889297 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2889297 ']' 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2889297 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2889297 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2889297' 00:11:54.299 killing process with pid 2889297 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2889297 00:11:54.299 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2889297 00:11:54.299 nvmf threads initialize successfully 00:11:54.299 bdev subsystem init successfully 00:11:54.299 created a nvmf target service 00:11:54.299 create targets's poll groups done 00:11:54.299 all subsystems of target started 00:11:54.299 nvmf target is running 00:11:54.299 all subsystems of target stopped 00:11:54.299 destroy targets's poll groups done 00:11:54.299 destroyed the nvmf target service 00:11:54.299 bdev subsystem 
finish successfully 00:11:54.299 nvmf threads destroy successfully 00:11:54.299 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:54.299 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:54.299 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:54.299 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:54.299 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:54.299 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:54.299 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:54.299 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:54.299 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:54.299 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.299 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.299 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:56.205 00:11:56.205 real 0m17.376s 00:11:56.205 user 0m49.388s 00:11:56.205 sys 0m3.339s 00:11:56.205 
18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:56.205 ************************************ 00:11:56.205 END TEST nvmf_example 00:11:56.205 ************************************ 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:56.205 ************************************ 00:11:56.205 START TEST nvmf_filesystem 00:11:56.205 ************************************ 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:56.205 * Looking for test storage... 
00:11:56.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:56.205 
18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:56.205 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:56.205 --rc genhtml_branch_coverage=1 00:11:56.205 --rc genhtml_function_coverage=1 00:11:56.205 --rc genhtml_legend=1 00:11:56.205 --rc geninfo_all_blocks=1 00:11:56.205 --rc geninfo_unexecuted_blocks=1 00:11:56.205 00:11:56.205 ' 00:11:56.205 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:56.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.205 --rc genhtml_branch_coverage=1 00:11:56.205 --rc genhtml_function_coverage=1 00:11:56.205 --rc genhtml_legend=1 00:11:56.205 --rc geninfo_all_blocks=1 00:11:56.205 --rc geninfo_unexecuted_blocks=1 00:11:56.205 00:11:56.205 ' 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:56.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.206 --rc genhtml_branch_coverage=1 00:11:56.206 --rc genhtml_function_coverage=1 00:11:56.206 --rc genhtml_legend=1 00:11:56.206 --rc geninfo_all_blocks=1 00:11:56.206 --rc geninfo_unexecuted_blocks=1 00:11:56.206 00:11:56.206 ' 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:56.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.206 --rc genhtml_branch_coverage=1 00:11:56.206 --rc genhtml_function_coverage=1 00:11:56.206 --rc genhtml_legend=1 00:11:56.206 --rc geninfo_all_blocks=1 00:11:56.206 --rc geninfo_unexecuted_blocks=1 00:11:56.206 00:11:56.206 ' 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:56.206 18:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:56.206 18:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:56.206 18:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:56.206 18:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:56.206 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:56.207 18:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:56.207 
18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:56.207 #define SPDK_CONFIG_H 00:11:56.207 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:56.207 #define SPDK_CONFIG_APPS 1 00:11:56.207 #define SPDK_CONFIG_ARCH native 00:11:56.207 #define SPDK_CONFIG_ASAN 1 00:11:56.207 #undef SPDK_CONFIG_AVAHI 00:11:56.207 #undef SPDK_CONFIG_CET 00:11:56.207 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:56.207 #define SPDK_CONFIG_COVERAGE 1 00:11:56.207 #define SPDK_CONFIG_CROSS_PREFIX 00:11:56.207 #undef SPDK_CONFIG_CRYPTO 00:11:56.207 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:56.207 #undef SPDK_CONFIG_CUSTOMOCF 00:11:56.207 #undef SPDK_CONFIG_DAOS 00:11:56.207 #define SPDK_CONFIG_DAOS_DIR 00:11:56.207 #define SPDK_CONFIG_DEBUG 1 00:11:56.207 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:56.207 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:56.207 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:56.207 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:56.207 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:56.207 #undef SPDK_CONFIG_DPDK_UADK 00:11:56.207 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:56.207 #define SPDK_CONFIG_EXAMPLES 1 00:11:56.207 #undef SPDK_CONFIG_FC 00:11:56.207 #define SPDK_CONFIG_FC_PATH 00:11:56.207 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:56.207 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:56.207 #define SPDK_CONFIG_FSDEV 1 00:11:56.207 #undef SPDK_CONFIG_FUSE 00:11:56.207 #undef SPDK_CONFIG_FUZZER 00:11:56.207 #define SPDK_CONFIG_FUZZER_LIB 00:11:56.207 #undef SPDK_CONFIG_GOLANG 00:11:56.207 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:56.207 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:56.207 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:56.207 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:56.207 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:56.207 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:56.207 #undef SPDK_CONFIG_HAVE_LZ4 00:11:56.207 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:56.207 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:56.207 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:56.207 #define SPDK_CONFIG_IDXD 1 00:11:56.207 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:56.207 #undef SPDK_CONFIG_IPSEC_MB 00:11:56.207 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:56.207 #define SPDK_CONFIG_ISAL 1 00:11:56.207 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:56.207 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:56.207 #define SPDK_CONFIG_LIBDIR 00:11:56.207 #undef SPDK_CONFIG_LTO 00:11:56.207 #define SPDK_CONFIG_MAX_LCORES 128 00:11:56.207 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:56.207 #define SPDK_CONFIG_NVME_CUSE 1 00:11:56.207 #undef SPDK_CONFIG_OCF 00:11:56.207 #define SPDK_CONFIG_OCF_PATH 00:11:56.207 #define SPDK_CONFIG_OPENSSL_PATH 00:11:56.207 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:56.207 #define SPDK_CONFIG_PGO_DIR 00:11:56.207 #undef SPDK_CONFIG_PGO_USE 00:11:56.207 #define SPDK_CONFIG_PREFIX /usr/local 00:11:56.207 #undef SPDK_CONFIG_RAID5F 00:11:56.207 #undef SPDK_CONFIG_RBD 00:11:56.207 #define SPDK_CONFIG_RDMA 1 00:11:56.207 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:56.207 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:56.207 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:56.207 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:56.207 #define SPDK_CONFIG_SHARED 1 00:11:56.207 #undef SPDK_CONFIG_SMA 00:11:56.207 #define SPDK_CONFIG_TESTS 1 00:11:56.207 #undef SPDK_CONFIG_TSAN 00:11:56.207 #define SPDK_CONFIG_UBLK 1 00:11:56.207 #define SPDK_CONFIG_UBSAN 1 00:11:56.207 #undef SPDK_CONFIG_UNIT_TESTS 00:11:56.207 #undef SPDK_CONFIG_URING 00:11:56.207 #define SPDK_CONFIG_URING_PATH 00:11:56.207 #undef SPDK_CONFIG_URING_ZNS 00:11:56.207 #undef SPDK_CONFIG_USDT 00:11:56.207 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:56.207 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:56.207 #undef SPDK_CONFIG_VFIO_USER 00:11:56.207 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:56.207 #define SPDK_CONFIG_VHOST 1 00:11:56.207 #define SPDK_CONFIG_VIRTIO 1 00:11:56.207 #undef SPDK_CONFIG_VTUNE 00:11:56.207 #define SPDK_CONFIG_VTUNE_DIR 00:11:56.207 #define SPDK_CONFIG_WERROR 1 00:11:56.207 #define SPDK_CONFIG_WPDK_DIR 00:11:56.207 #undef SPDK_CONFIG_XNVME 00:11:56.207 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:56.207 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:56.208 18:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:56.208 
18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:56.208 18:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:56.208 
18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:56.208 18:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:56.208 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:56.209 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2891212 ]] 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2891212 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.rIxCLz 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.rIxCLz/tests/target /tmp/spdk.rIxCLz 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=55038951424 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988519936 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6949568512 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:56.210 
18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30982893568 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375269376 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22437888 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993850368 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:11:56.210 18:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=409600 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:56.210 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:56.211 * Looking for test storage... 
00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=55038951424 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9164161024 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.211 18:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:56.211 18:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:56.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.211 --rc genhtml_branch_coverage=1 00:11:56.211 --rc genhtml_function_coverage=1 00:11:56.211 --rc genhtml_legend=1 00:11:56.211 --rc geninfo_all_blocks=1 00:11:56.211 --rc geninfo_unexecuted_blocks=1 00:11:56.211 00:11:56.211 ' 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:56.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.211 --rc genhtml_branch_coverage=1 00:11:56.211 --rc genhtml_function_coverage=1 00:11:56.211 --rc genhtml_legend=1 00:11:56.211 --rc geninfo_all_blocks=1 00:11:56.211 --rc geninfo_unexecuted_blocks=1 00:11:56.211 00:11:56.211 ' 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:56.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.211 --rc genhtml_branch_coverage=1 00:11:56.211 --rc genhtml_function_coverage=1 00:11:56.211 --rc genhtml_legend=1 00:11:56.211 --rc geninfo_all_blocks=1 00:11:56.211 --rc geninfo_unexecuted_blocks=1 00:11:56.211 00:11:56.211 ' 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:56.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.211 --rc genhtml_branch_coverage=1 00:11:56.211 --rc genhtml_function_coverage=1 00:11:56.211 --rc genhtml_legend=1 00:11:56.211 --rc geninfo_all_blocks=1 00:11:56.211 --rc geninfo_unexecuted_blocks=1 00:11:56.211 00:11:56.211 ' 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.211 18:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.211 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.212 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.212 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:56.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:56.470 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.371 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:58.371 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:58.371 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:58.371 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:58.371 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:58.371 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:58.371 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:58.371 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:58.371 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:58.371 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:58.371 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:58.371 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:58.371 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:58.371 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:58.371 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:58.371 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.371 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.372 18:18:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:58.372 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:58.372 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.372 18:18:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:58.372 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:58.372 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:58.372 18:18:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:58.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:58.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms
00:11:58.372 
00:11:58.372 --- 10.0.0.2 ping statistics ---
00:11:58.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:58.372 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:58.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:58.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms
00:11:58.372 
00:11:58.372 --- 10.0.0.1 ping statistics ---
00:11:58.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:58.372 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:58.372 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:11:58.372 ************************************
00:11:58.372 START TEST nvmf_filesystem_no_in_capsule
00:11:58.372 ************************************
00:11:58.631 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0
00:11:58.631 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:11:58.631 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:11:58.631 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:58.631 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:58.631 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:58.631 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2892851
00:11:58.631 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:58.631 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2892851
00:11:58.631 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2892851 ']'
00:11:58.631 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:58.631 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:58.631 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:58.631 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:58.631 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:58.631 [2024-11-18 18:18:56.803697] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:11:58.631 [2024-11-18 18:18:56.803849] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:58.631 [2024-11-18 18:18:56.947381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:58.889 [2024-11-18 18:18:57.085350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:58.889 [2024-11-18 18:18:57.085438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:58.889 [2024-11-18 18:18:57.085465] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:58.889 [2024-11-18 18:18:57.085490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:58.889 [2024-11-18 18:18:57.085511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:58.889 [2024-11-18 18:18:57.088375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:58.889 [2024-11-18 18:18:57.088443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:58.889 [2024-11-18 18:18:57.088529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:58.889 [2024-11-18 18:18:57.088536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:59.823 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:59.823 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0
00:11:59.823 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:59.823 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:59.823 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:59.823 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:59.823 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:11:59.823 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:11:59.823 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:59.823 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:59.823 [2024-11-18 18:18:57.818644] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:59.823 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:59.823 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:11:59.823 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:59.823 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:00.081 Malloc1
00:12:00.081 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.081 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:12:00.081 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.081 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:00.081 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.081 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:00.081 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.081 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:00.081 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.081 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:00.081 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.081 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:00.082 [2024-11-18 18:18:58.404751] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:00.082 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.082 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:12:00.082 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1
00:12:00.082 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info
00:12:00.082 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs
00:12:00.082 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:12:00.082 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:12:00.082 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.082 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:00.339 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.339 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:12:00.339 {
00:12:00.339 "name": "Malloc1",
00:12:00.339 "aliases": [
00:12:00.339 "bd429f63-8f3b-4c42-8ed2-9c6284c28190"
00:12:00.339 ],
00:12:00.339 "product_name": "Malloc disk",
00:12:00.339 "block_size": 512,
00:12:00.339 "num_blocks": 1048576,
00:12:00.339 "uuid": "bd429f63-8f3b-4c42-8ed2-9c6284c28190",
00:12:00.339 "assigned_rate_limits": {
00:12:00.340 "rw_ios_per_sec": 0,
00:12:00.340 "rw_mbytes_per_sec": 0,
00:12:00.340 "r_mbytes_per_sec": 0,
00:12:00.340 "w_mbytes_per_sec": 0
00:12:00.340 },
00:12:00.340 "claimed": true,
00:12:00.340 "claim_type": "exclusive_write",
00:12:00.340 "zoned": false,
00:12:00.340 "supported_io_types": {
00:12:00.340 "read": true,
00:12:00.340 "write": true,
00:12:00.340 "unmap": true,
00:12:00.340 "flush": true,
00:12:00.340 "reset": true,
00:12:00.340 "nvme_admin": false,
00:12:00.340 "nvme_io": false,
00:12:00.340 "nvme_io_md": false,
00:12:00.340 "write_zeroes": true,
00:12:00.340 "zcopy": true,
00:12:00.340 "get_zone_info": false,
00:12:00.340 "zone_management": false,
00:12:00.340 "zone_append": false,
00:12:00.340 "compare": false,
00:12:00.340 "compare_and_write": false,
00:12:00.340 "abort": true,
00:12:00.340 "seek_hole": false,
00:12:00.340 "seek_data": false,
00:12:00.340 "copy": true,
00:12:00.340 "nvme_iov_md": false
00:12:00.340 },
00:12:00.340 "memory_domains": [
00:12:00.340 {
00:12:00.340 "dma_device_id": "system",
00:12:00.340 "dma_device_type": 1
00:12:00.340 },
00:12:00.340 {
00:12:00.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:00.340 "dma_device_type": 2
00:12:00.340 }
00:12:00.340 ],
00:12:00.340 "driver_specific": {}
00:12:00.340 }
00:12:00.340 ]'
00:12:00.340 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:12:00.340 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:12:00.340 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:12:00.340 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:12:00.340 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:12:00.340 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:12:00.340 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:12:00.340 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:00.906 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:12:00.906 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0
00:12:00.906 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:00.906 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:00.906 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2
00:12:03.432 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:03.432 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:03.432 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:03.432 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:03.432 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:03.432 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0
00:12:03.432 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:12:03.432 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:12:03.432 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:12:03.432 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:12:03.432 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:12:03.432 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:12:03.432 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:12:03.432 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:12:03.432 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:12:03.432 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:12:03.432 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:12:03.432 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:12:03.690 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:12:05.063 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']'
00:12:05.063 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1
00:12:05.063 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:05.063 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:05.063 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:05.063 ************************************
00:12:05.063 START TEST filesystem_ext4
00:12:05.063 ************************************
00:12:05.063 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1
00:12:05.063 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:12:05.063 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:12:05.063 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:12:05.063 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4
00:12:05.063 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:12:05.063 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0
00:12:05.063 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force
00:12:05.063 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:12:05.063 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:12:05.063 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:12:05.063 mke2fs 1.47.0 (5-Feb-2023)
00:12:05.063 Discarding device blocks: 0/522240 done
00:12:05.063 Creating filesystem with 522240 1k blocks and 130560 inodes
00:12:05.063 Filesystem UUID: 64a1dc4d-8641-42f9-ade5-9999febd100e
00:12:05.063 Superblock backups stored on blocks:
00:12:05.063 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:12:05.063 
00:12:05.063 Allocating group tables: 0/64 done
00:12:05.063 Writing inode tables: 0/64 done
00:12:05.063 Creating journal (8192 blocks): done
00:12:05.063 Writing superblocks and filesystem accounting information: 0/64 done
00:12:05.063 
00:12:05.063 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0
00:12:05.063 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:12:10.379 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2892851
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:12:10.637 
00:12:10.637 real 0m5.775s
00:12:10.637 user 0m0.009s
00:12:10.637 sys 0m0.058s
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x
00:12:10.637 ************************************
00:12:10.637 END TEST filesystem_ext4
00:12:10.637 ************************************
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:10.637 ************************************
00:12:10.637 START TEST filesystem_btrfs
00:12:10.637 ************************************
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']'
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f
00:12:10.637 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:12:10.895 btrfs-progs v6.8.1
00:12:10.895 See https://btrfs.readthedocs.io for more information.
00:12:10.895 
00:12:10.895 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:12:10.895 NOTE: several default settings have changed in version 5.15, please make sure
00:12:10.895 this does not affect your deployments:
00:12:10.895 - DUP for metadata (-m dup)
00:12:10.895 - enabled no-holes (-O no-holes)
00:12:10.895 - enabled free-space-tree (-R free-space-tree)
00:12:10.895 
00:12:10.895 Label: (null)
00:12:10.895 UUID: f1b94ccf-c395-4f42-aad8-973a2ed8fa53
00:12:10.895 Node size: 16384
00:12:10.895 Sector size: 4096 (CPU page size: 4096)
00:12:10.895 Filesystem size: 510.00MiB
00:12:10.895 Block group profiles:
00:12:10.895 Data: single 8.00MiB
00:12:10.895 Metadata: DUP 32.00MiB
00:12:10.895 System: DUP 8.00MiB
00:12:10.895 SSD detected: yes
00:12:10.895 Zoned device: no
00:12:10.895 Features: extref, skinny-metadata, no-holes, free-space-tree
00:12:10.895 Checksum: crc32c
00:12:10.895 Number of devices: 1
00:12:10.895 Devices:
00:12:10.895 ID SIZE PATH
00:12:10.895 1 510.00MiB /dev/nvme0n1p1
00:12:10.895 
00:12:10.895 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0
00:12:10.895 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:12:11.152 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:12:11.152 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2892851
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:12:11.153 
00:12:11.153 real 0m0.523s
00:12:11.153 user 0m0.027s
00:12:11.153 sys 0m0.094s
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:12:11.153 ************************************
00:12:11.153 END TEST filesystem_btrfs
00:12:11.153 ************************************
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:11.153 ************************************
00:12:11.153 START TEST filesystem_xfs
00:12:11.153 ************************************
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f
00:12:11.153 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:12:11.411 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:12:11.411 = sectsz=512 attr=2, projid32bit=1
00:12:11.411 = crc=1 finobt=1, sparse=1, rmapbt=0
00:12:11.411 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:12:11.411 data = bsize=4096 blocks=130560, imaxpct=25
00:12:11.411 = sunit=0 swidth=0 blks
00:12:11.411 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:12:11.411 log =internal log bsize=4096 blocks=16384, version=2
00:12:11.411 = sectsz=512 sunit=0 blks, lazy-count=1
00:12:11.411 realtime =none extsz=4096 blocks=0, rtextents=0
00:12:11.976 Discarding blocks...Done.
00:12:11.977 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:11.977 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:13.874 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:13.874 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:13.874 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:13.874 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:13.874 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:13.874 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:13.874 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2892851 00:12:13.874 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:13.874 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:13.874 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:13.874 18:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:13.874 00:12:13.874 real 0m2.607s 00:12:13.874 user 0m0.014s 00:12:13.874 sys 0m0.064s 00:12:13.874 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.875 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:13.875 ************************************ 00:12:13.875 END TEST filesystem_xfs 00:12:13.875 ************************************ 00:12:13.875 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:14.133 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:14.133 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:14.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.133 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:14.133 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:14.133 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:14.133 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.133 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:14.133 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.133 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:14.133 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.133 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.133 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.133 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.133 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:14.133 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2892851 00:12:14.133 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2892851 ']' 00:12:14.133 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2892851 00:12:14.133 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:14.133 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.133 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2892851 00:12:14.391 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.391 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.391 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2892851' 00:12:14.391 killing process with pid 2892851 00:12:14.391 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2892851 00:12:14.391 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2892851 00:12:16.919 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:16.919 00:12:16.919 real 0m18.169s 00:12:16.919 user 1m8.704s 00:12:16.919 sys 0m2.258s 00:12:16.919 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.919 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.919 ************************************ 00:12:16.919 END TEST nvmf_filesystem_no_in_capsule 00:12:16.919 ************************************ 00:12:16.919 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:16.919 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:16.919 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.919 18:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:16.919 ************************************ 00:12:16.919 START TEST nvmf_filesystem_in_capsule 00:12:16.919 ************************************ 00:12:16.919 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:16.919 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:16.919 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:16.919 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:16.919 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:16.919 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.919 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2895207 00:12:16.919 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:16.919 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2895207 00:12:16.919 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2895207 ']' 00:12:16.919 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.919 18:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.919 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.919 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.920 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.920 [2024-11-18 18:19:15.028479] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:16.920 [2024-11-18 18:19:15.028665] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.920 [2024-11-18 18:19:15.178821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.178 [2024-11-18 18:19:15.321538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.178 [2024-11-18 18:19:15.321635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.178 [2024-11-18 18:19:15.321663] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.178 [2024-11-18 18:19:15.321689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.178 [2024-11-18 18:19:15.321709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:17.178 [2024-11-18 18:19:15.324537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.178 [2024-11-18 18:19:15.324621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.178 [2024-11-18 18:19:15.324710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.178 [2024-11-18 18:19:15.324714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.746 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.746 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:17.746 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:17.746 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:17.746 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.746 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.746 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:17.746 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:17.746 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.746 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.746 [2024-11-18 18:19:16.004143] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.746 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.746 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:17.746 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.746 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.312 Malloc1 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.312 18:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.312 [2024-11-18 18:19:16.614870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.312 18:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:18.312 { 00:12:18.312 "name": "Malloc1", 00:12:18.312 "aliases": [ 00:12:18.312 "cc4ef8d7-b8d9-4ecf-9320-fac4d7d247e3" 00:12:18.312 ], 00:12:18.312 "product_name": "Malloc disk", 00:12:18.312 "block_size": 512, 00:12:18.312 "num_blocks": 1048576, 00:12:18.312 "uuid": "cc4ef8d7-b8d9-4ecf-9320-fac4d7d247e3", 00:12:18.312 "assigned_rate_limits": { 00:12:18.312 "rw_ios_per_sec": 0, 00:12:18.312 "rw_mbytes_per_sec": 0, 00:12:18.312 "r_mbytes_per_sec": 0, 00:12:18.312 "w_mbytes_per_sec": 0 00:12:18.312 }, 00:12:18.312 "claimed": true, 00:12:18.312 "claim_type": "exclusive_write", 00:12:18.312 "zoned": false, 00:12:18.312 "supported_io_types": { 00:12:18.312 "read": true, 00:12:18.312 "write": true, 00:12:18.312 "unmap": true, 00:12:18.312 "flush": true, 00:12:18.312 "reset": true, 00:12:18.312 "nvme_admin": false, 00:12:18.312 "nvme_io": false, 00:12:18.312 "nvme_io_md": false, 00:12:18.312 "write_zeroes": true, 00:12:18.312 "zcopy": true, 00:12:18.312 "get_zone_info": false, 00:12:18.312 "zone_management": false, 00:12:18.312 "zone_append": false, 00:12:18.312 "compare": false, 00:12:18.312 "compare_and_write": false, 00:12:18.312 "abort": true, 00:12:18.312 "seek_hole": false, 00:12:18.312 "seek_data": false, 00:12:18.312 "copy": true, 00:12:18.312 "nvme_iov_md": false 00:12:18.312 }, 00:12:18.312 "memory_domains": [ 00:12:18.312 { 00:12:18.312 "dma_device_id": "system", 00:12:18.312 "dma_device_type": 1 00:12:18.312 }, 00:12:18.312 { 00:12:18.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.312 "dma_device_type": 2 00:12:18.312 } 00:12:18.312 ], 00:12:18.312 
"driver_specific": {} 00:12:18.312 } 00:12:18.312 ]' 00:12:18.312 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:18.570 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:18.570 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:18.570 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:18.570 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:18.570 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:18.570 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:18.570 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.136 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:19.136 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:19.136 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.136 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:12:19.136 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:21.044 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:21.044 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:21.045 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:21.045 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:21.045 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.045 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:21.045 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:21.045 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:21.045 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:21.045 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:21.045 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:21.045 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:21.045 18:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:21.045 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:21.045 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:21.045 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:21.045 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:21.302 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:21.560 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:22.494 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:22.494 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:22.494 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:22.494 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.494 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.494 ************************************ 00:12:22.494 START TEST filesystem_in_capsule_ext4 00:12:22.494 ************************************ 00:12:22.494 18:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:22.494 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:22.494 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:22.494 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:22.494 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:22.494 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:22.494 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:22.494 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:22.494 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:22.494 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:22.494 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:22.494 mke2fs 1.47.0 (5-Feb-2023) 00:12:22.751 Discarding device blocks: 
0/522240 done 00:12:22.752 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:22.752 Filesystem UUID: bb5db9b3-5617-4a35-bea1-7400a0535dfa 00:12:22.752 Superblock backups stored on blocks: 00:12:22.752 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:22.752 00:12:22.752 Allocating group tables: 0/64 done 00:12:22.752 Writing inode tables: 0/64 done 00:12:23.317 Creating journal (8192 blocks): done 00:12:25.515 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:12:25.515 00:12:25.515 18:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:25.515 18:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:30.777 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:30.777 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:30.777 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:30.777 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:30.777 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:30.777 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:30.777 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2895207 00:12:30.777 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:30.777 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:30.777 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:30.777 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:30.777 00:12:30.777 real 0m8.338s 00:12:30.777 user 0m0.013s 00:12:30.777 sys 0m0.069s 00:12:30.777 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.777 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:30.777 ************************************ 00:12:30.777 END TEST filesystem_in_capsule_ext4 00:12:30.777 ************************************ 00:12:30.777 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:30.777 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:30.777 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.777 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:31.034 ************************************ 00:12:31.034 START 
TEST filesystem_in_capsule_btrfs 00:12:31.034 ************************************ 00:12:31.034 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:31.034 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:31.035 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:31.035 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:31.035 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:31.035 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:31.035 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:31.035 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:31.035 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:31.035 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:31.035 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:31.035 btrfs-progs v6.8.1 00:12:31.035 See https://btrfs.readthedocs.io for more information. 00:12:31.035 00:12:31.035 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:31.035 NOTE: several default settings have changed in version 5.15, please make sure 00:12:31.035 this does not affect your deployments: 00:12:31.035 - DUP for metadata (-m dup) 00:12:31.035 - enabled no-holes (-O no-holes) 00:12:31.035 - enabled free-space-tree (-R free-space-tree) 00:12:31.035 00:12:31.035 Label: (null) 00:12:31.035 UUID: 130f2679-51ef-48ec-8605-64e121ab323b 00:12:31.035 Node size: 16384 00:12:31.035 Sector size: 4096 (CPU page size: 4096) 00:12:31.035 Filesystem size: 510.00MiB 00:12:31.035 Block group profiles: 00:12:31.035 Data: single 8.00MiB 00:12:31.035 Metadata: DUP 32.00MiB 00:12:31.035 System: DUP 8.00MiB 00:12:31.035 SSD detected: yes 00:12:31.035 Zoned device: no 00:12:31.035 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:31.035 Checksum: crc32c 00:12:31.035 Number of devices: 1 00:12:31.035 Devices: 00:12:31.035 ID SIZE PATH 00:12:31.035 1 510.00MiB /dev/nvme0n1p1 00:12:31.035 00:12:31.035 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:31.035 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:31.293 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:31.293 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:31.293 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:31.293 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:31.293 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:31.293 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:31.293 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2895207 00:12:31.293 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:31.293 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:31.293 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:31.293 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:31.293 00:12:31.293 real 0m0.482s 00:12:31.293 user 0m0.016s 00:12:31.293 sys 0m0.110s 00:12:31.293 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.293 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:31.293 ************************************ 00:12:31.293 END TEST filesystem_in_capsule_btrfs 00:12:31.293 ************************************ 00:12:31.552 18:19:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:31.552 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:31.552 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.552 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:31.552 ************************************ 00:12:31.552 START TEST filesystem_in_capsule_xfs 00:12:31.552 ************************************ 00:12:31.552 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:31.552 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:31.552 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:31.552 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:31.553 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:31.553 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:31.553 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:31.553 
18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:31.553 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:31.553 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:31.553 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:31.553 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:31.553 = sectsz=512 attr=2, projid32bit=1 00:12:31.553 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:31.553 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:31.553 data = bsize=4096 blocks=130560, imaxpct=25 00:12:31.553 = sunit=0 swidth=0 blks 00:12:31.553 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:31.553 log =internal log bsize=4096 blocks=16384, version=2 00:12:31.553 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:31.553 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:32.485 Discarding blocks...Done. 
00:12:32.485 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:32.485 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:34.385 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:34.385 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:34.385 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:34.385 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:34.385 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:34.385 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:34.385 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2895207 00:12:34.385 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:34.385 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:34.385 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:34.385 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:34.385 00:12:34.385 real 0m2.716s 00:12:34.385 user 0m0.021s 00:12:34.385 sys 0m0.054s 00:12:34.385 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.385 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:34.385 ************************************ 00:12:34.385 END TEST filesystem_in_capsule_xfs 00:12:34.385 ************************************ 00:12:34.385 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:34.385 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:34.385 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.643 18:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2895207 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2895207 ']' 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2895207 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.643 18:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2895207 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2895207' 00:12:34.643 killing process with pid 2895207 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2895207 00:12:34.643 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2895207 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:37.173 00:12:37.173 real 0m20.248s 00:12:37.173 user 1m16.668s 00:12:37.173 sys 0m2.549s 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.173 ************************************ 00:12:37.173 END TEST nvmf_filesystem_in_capsule 00:12:37.173 ************************************ 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.173 rmmod nvme_tcp 00:12:37.173 rmmod nvme_fabrics 00:12:37.173 rmmod nvme_keyring 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.173 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.073 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:39.073 00:12:39.073 real 0m43.119s 00:12:39.073 user 2m26.413s 00:12:39.073 sys 0m6.471s 00:12:39.073 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.073 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.073 ************************************ 00:12:39.073 END TEST nvmf_filesystem 00:12:39.073 ************************************ 00:12:39.073 18:19:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:39.073 18:19:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:39.073 18:19:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.073 18:19:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:39.073 ************************************ 00:12:39.073 START TEST nvmf_target_discovery 00:12:39.073 ************************************ 00:12:39.073 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:39.333 * Looking for test storage... 
00:12:39.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:39.333 
18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:39.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.333 --rc genhtml_branch_coverage=1 00:12:39.333 --rc genhtml_function_coverage=1 00:12:39.333 --rc genhtml_legend=1 00:12:39.333 --rc geninfo_all_blocks=1 00:12:39.333 --rc geninfo_unexecuted_blocks=1 00:12:39.333 00:12:39.333 ' 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:39.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.333 --rc genhtml_branch_coverage=1 00:12:39.333 --rc genhtml_function_coverage=1 00:12:39.333 --rc genhtml_legend=1 00:12:39.333 --rc geninfo_all_blocks=1 00:12:39.333 --rc geninfo_unexecuted_blocks=1 00:12:39.333 00:12:39.333 ' 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:39.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.333 --rc genhtml_branch_coverage=1 00:12:39.333 --rc genhtml_function_coverage=1 00:12:39.333 --rc genhtml_legend=1 00:12:39.333 --rc geninfo_all_blocks=1 00:12:39.333 --rc geninfo_unexecuted_blocks=1 00:12:39.333 00:12:39.333 ' 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:39.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.333 --rc genhtml_branch_coverage=1 00:12:39.333 --rc genhtml_function_coverage=1 00:12:39.333 --rc genhtml_legend=1 00:12:39.333 --rc geninfo_all_blocks=1 00:12:39.333 --rc geninfo_unexecuted_blocks=1 00:12:39.333 00:12:39.333 ' 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.333 18:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.333 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:39.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:39.334 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.291 18:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.291 18:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.291 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:41.292 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:41.292 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.292 18:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:41.292 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.292 18:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:41.292 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:41.292 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:41.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:41.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:12:41.551 00:12:41.551 --- 10.0.0.2 ping statistics --- 00:12:41.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.551 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:41.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:41.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:12:41.551 00:12:41.551 --- 10.0.0.1 ping statistics --- 00:12:41.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.551 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2899761 00:12:41.551 18:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2899761 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2899761 ']' 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.551 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.551 [2024-11-18 18:19:39.844972] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:41.551 [2024-11-18 18:19:39.845130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.809 [2024-11-18 18:19:39.995510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:41.809 [2024-11-18 18:19:40.137603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:41.809 [2024-11-18 18:19:40.137722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.809 [2024-11-18 18:19:40.137749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.809 [2024-11-18 18:19:40.137774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.809 [2024-11-18 18:19:40.137794] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.809 [2024-11-18 18:19:40.140676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.809 [2024-11-18 18:19:40.141046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:41.809 [2024-11-18 18:19:40.141145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.809 [2024-11-18 18:19:40.141149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.745 [2024-11-18 18:19:40.852466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.745 Null1 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.745 
18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.745 [2024-11-18 18:19:40.900842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.745 Null2 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.745 
18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.745 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.746 Null3 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.746 Null4 00:12:42.746 
18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.746 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.746 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.746 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:42.746 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.746 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.746 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.746 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:43.004 00:12:43.004 Discovery Log Number of Records 6, Generation counter 6 00:12:43.004 =====Discovery Log Entry 0====== 00:12:43.004 trtype: tcp 00:12:43.004 adrfam: ipv4 00:12:43.004 subtype: current discovery subsystem 00:12:43.004 treq: not required 00:12:43.004 portid: 0 00:12:43.004 trsvcid: 4420 00:12:43.004 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:43.004 traddr: 10.0.0.2 00:12:43.004 eflags: explicit discovery connections, duplicate discovery information 00:12:43.004 sectype: none 00:12:43.004 =====Discovery Log Entry 1====== 00:12:43.004 trtype: tcp 00:12:43.004 adrfam: ipv4 00:12:43.004 subtype: nvme subsystem 00:12:43.004 treq: not required 00:12:43.004 portid: 0 00:12:43.004 trsvcid: 4420 00:12:43.004 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:43.004 traddr: 10.0.0.2 00:12:43.004 eflags: none 00:12:43.004 sectype: none 00:12:43.004 =====Discovery Log Entry 2====== 00:12:43.004 
trtype: tcp 00:12:43.004 adrfam: ipv4 00:12:43.004 subtype: nvme subsystem 00:12:43.004 treq: not required 00:12:43.004 portid: 0 00:12:43.004 trsvcid: 4420 00:12:43.004 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:43.004 traddr: 10.0.0.2 00:12:43.004 eflags: none 00:12:43.004 sectype: none 00:12:43.004 =====Discovery Log Entry 3====== 00:12:43.004 trtype: tcp 00:12:43.004 adrfam: ipv4 00:12:43.004 subtype: nvme subsystem 00:12:43.004 treq: not required 00:12:43.004 portid: 0 00:12:43.004 trsvcid: 4420 00:12:43.004 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:43.004 traddr: 10.0.0.2 00:12:43.004 eflags: none 00:12:43.004 sectype: none 00:12:43.004 =====Discovery Log Entry 4====== 00:12:43.004 trtype: tcp 00:12:43.004 adrfam: ipv4 00:12:43.004 subtype: nvme subsystem 00:12:43.004 treq: not required 00:12:43.005 portid: 0 00:12:43.005 trsvcid: 4420 00:12:43.005 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:43.005 traddr: 10.0.0.2 00:12:43.005 eflags: none 00:12:43.005 sectype: none 00:12:43.005 =====Discovery Log Entry 5====== 00:12:43.005 trtype: tcp 00:12:43.005 adrfam: ipv4 00:12:43.005 subtype: discovery subsystem referral 00:12:43.005 treq: not required 00:12:43.005 portid: 0 00:12:43.005 trsvcid: 4430 00:12:43.005 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:43.005 traddr: 10.0.0.2 00:12:43.005 eflags: none 00:12:43.005 sectype: none 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:43.005 Perform nvmf subsystem discovery via RPC 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.005 [ 00:12:43.005 { 00:12:43.005 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:43.005 "subtype": "Discovery", 00:12:43.005 "listen_addresses": [ 00:12:43.005 { 00:12:43.005 "trtype": "TCP", 00:12:43.005 "adrfam": "IPv4", 00:12:43.005 "traddr": "10.0.0.2", 00:12:43.005 "trsvcid": "4420" 00:12:43.005 } 00:12:43.005 ], 00:12:43.005 "allow_any_host": true, 00:12:43.005 "hosts": [] 00:12:43.005 }, 00:12:43.005 { 00:12:43.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:43.005 "subtype": "NVMe", 00:12:43.005 "listen_addresses": [ 00:12:43.005 { 00:12:43.005 "trtype": "TCP", 00:12:43.005 "adrfam": "IPv4", 00:12:43.005 "traddr": "10.0.0.2", 00:12:43.005 "trsvcid": "4420" 00:12:43.005 } 00:12:43.005 ], 00:12:43.005 "allow_any_host": true, 00:12:43.005 "hosts": [], 00:12:43.005 "serial_number": "SPDK00000000000001", 00:12:43.005 "model_number": "SPDK bdev Controller", 00:12:43.005 "max_namespaces": 32, 00:12:43.005 "min_cntlid": 1, 00:12:43.005 "max_cntlid": 65519, 00:12:43.005 "namespaces": [ 00:12:43.005 { 00:12:43.005 "nsid": 1, 00:12:43.005 "bdev_name": "Null1", 00:12:43.005 "name": "Null1", 00:12:43.005 "nguid": "3824EE8CDBD640E6B40461972A05B78A", 00:12:43.005 "uuid": "3824ee8c-dbd6-40e6-b404-61972a05b78a" 00:12:43.005 } 00:12:43.005 ] 00:12:43.005 }, 00:12:43.005 { 00:12:43.005 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:43.005 "subtype": "NVMe", 00:12:43.005 "listen_addresses": [ 00:12:43.005 { 00:12:43.005 "trtype": "TCP", 00:12:43.005 "adrfam": "IPv4", 00:12:43.005 "traddr": "10.0.0.2", 00:12:43.005 "trsvcid": "4420" 00:12:43.005 } 00:12:43.005 ], 00:12:43.005 "allow_any_host": true, 00:12:43.005 "hosts": [], 00:12:43.005 "serial_number": "SPDK00000000000002", 00:12:43.005 "model_number": "SPDK bdev Controller", 00:12:43.005 "max_namespaces": 32, 00:12:43.005 "min_cntlid": 1, 00:12:43.005 "max_cntlid": 65519, 00:12:43.005 "namespaces": [ 00:12:43.005 { 00:12:43.005 "nsid": 1, 00:12:43.005 "bdev_name": "Null2", 00:12:43.005 "name": "Null2", 00:12:43.005 "nguid": "C2C9F558EE7E4F2EAF1FF13D359A55BE", 
00:12:43.005 "uuid": "c2c9f558-ee7e-4f2e-af1f-f13d359a55be" 00:12:43.005 } 00:12:43.005 ] 00:12:43.005 }, 00:12:43.005 { 00:12:43.005 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:43.005 "subtype": "NVMe", 00:12:43.005 "listen_addresses": [ 00:12:43.005 { 00:12:43.005 "trtype": "TCP", 00:12:43.005 "adrfam": "IPv4", 00:12:43.005 "traddr": "10.0.0.2", 00:12:43.005 "trsvcid": "4420" 00:12:43.005 } 00:12:43.005 ], 00:12:43.005 "allow_any_host": true, 00:12:43.005 "hosts": [], 00:12:43.005 "serial_number": "SPDK00000000000003", 00:12:43.005 "model_number": "SPDK bdev Controller", 00:12:43.005 "max_namespaces": 32, 00:12:43.005 "min_cntlid": 1, 00:12:43.005 "max_cntlid": 65519, 00:12:43.005 "namespaces": [ 00:12:43.005 { 00:12:43.005 "nsid": 1, 00:12:43.005 "bdev_name": "Null3", 00:12:43.005 "name": "Null3", 00:12:43.005 "nguid": "D5AF90DB31834166A6EC0AA22F3513F4", 00:12:43.005 "uuid": "d5af90db-3183-4166-a6ec-0aa22f3513f4" 00:12:43.005 } 00:12:43.005 ] 00:12:43.005 }, 00:12:43.005 { 00:12:43.005 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:43.005 "subtype": "NVMe", 00:12:43.005 "listen_addresses": [ 00:12:43.005 { 00:12:43.005 "trtype": "TCP", 00:12:43.005 "adrfam": "IPv4", 00:12:43.005 "traddr": "10.0.0.2", 00:12:43.005 "trsvcid": "4420" 00:12:43.005 } 00:12:43.005 ], 00:12:43.005 "allow_any_host": true, 00:12:43.005 "hosts": [], 00:12:43.005 "serial_number": "SPDK00000000000004", 00:12:43.005 "model_number": "SPDK bdev Controller", 00:12:43.005 "max_namespaces": 32, 00:12:43.005 "min_cntlid": 1, 00:12:43.005 "max_cntlid": 65519, 00:12:43.005 "namespaces": [ 00:12:43.005 { 00:12:43.005 "nsid": 1, 00:12:43.005 "bdev_name": "Null4", 00:12:43.005 "name": "Null4", 00:12:43.005 "nguid": "FF4D7CEC53E740259772B48FC2C04F86", 00:12:43.005 "uuid": "ff4d7cec-53e7-4025-9772-b48fc2c04f86" 00:12:43.005 } 00:12:43.005 ] 00:12:43.005 } 00:12:43.005 ] 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.005 
18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:43.005 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.006 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.263 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.263 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:43.263 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:43.263 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:43.263 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:43.263 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:43.263 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:43.263 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:43.263 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:43.264 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:43.264 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:43.264 rmmod nvme_tcp 00:12:43.264 rmmod nvme_fabrics 00:12:43.264 rmmod nvme_keyring 00:12:43.264 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:43.264 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:43.264 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:43.264 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2899761 ']' 00:12:43.264 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2899761 00:12:43.264 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2899761 ']' 00:12:43.264 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2899761 00:12:43.264 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:12:43.264 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.264 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2899761 00:12:43.264 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:43.264 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:43.264 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2899761' 00:12:43.264 killing process with pid 2899761 00:12:43.264 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2899761 00:12:43.264 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2899761 00:12:44.639 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:44.639 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:44.639 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:44.639 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:44.639 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:44.639 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:44.639 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:44.639 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:44.639 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:44.639 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.639 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.639 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.542 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:46.542 00:12:46.542 real 0m7.272s 00:12:46.542 user 0m9.838s 00:12:46.542 sys 0m2.062s 00:12:46.542 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.542 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.542 ************************************ 00:12:46.542 END TEST nvmf_target_discovery 00:12:46.542 ************************************ 00:12:46.542 18:19:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:46.543 ************************************ 00:12:46.543 START TEST nvmf_referrals 00:12:46.543 ************************************ 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:46.543 * Looking for test storage... 
00:12:46.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:46.543 18:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:46.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.543 
--rc genhtml_branch_coverage=1 00:12:46.543 --rc genhtml_function_coverage=1 00:12:46.543 --rc genhtml_legend=1 00:12:46.543 --rc geninfo_all_blocks=1 00:12:46.543 --rc geninfo_unexecuted_blocks=1 00:12:46.543 00:12:46.543 ' 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:46.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.543 --rc genhtml_branch_coverage=1 00:12:46.543 --rc genhtml_function_coverage=1 00:12:46.543 --rc genhtml_legend=1 00:12:46.543 --rc geninfo_all_blocks=1 00:12:46.543 --rc geninfo_unexecuted_blocks=1 00:12:46.543 00:12:46.543 ' 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:46.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.543 --rc genhtml_branch_coverage=1 00:12:46.543 --rc genhtml_function_coverage=1 00:12:46.543 --rc genhtml_legend=1 00:12:46.543 --rc geninfo_all_blocks=1 00:12:46.543 --rc geninfo_unexecuted_blocks=1 00:12:46.543 00:12:46.543 ' 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:46.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.543 --rc genhtml_branch_coverage=1 00:12:46.543 --rc genhtml_function_coverage=1 00:12:46.543 --rc genhtml_legend=1 00:12:46.543 --rc geninfo_all_blocks=1 00:12:46.543 --rc geninfo_unexecuted_blocks=1 00:12:46.543 00:12:46.543 ' 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.543 
18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:46.543 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.544 18:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:46.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:46.544 18:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:46.544 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:49.102 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:49.102 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:49.102 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:49.102 18:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:49.102 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:49.103 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:49.103 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:49.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:49.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:12:49.103 00:12:49.103 --- 10.0.0.2 ping statistics --- 00:12:49.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.103 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:49.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:49.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:12:49.103 00:12:49.103 --- 10.0.0.1 ping statistics --- 00:12:49.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.103 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2902004 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2902004 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2902004 ']' 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:49.103 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.103 [2024-11-18 18:19:47.237743] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:49.103 [2024-11-18 18:19:47.237880] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.103 [2024-11-18 18:19:47.382151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.361 [2024-11-18 18:19:47.510194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.361 [2024-11-18 18:19:47.510282] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:49.361 [2024-11-18 18:19:47.510305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.361 [2024-11-18 18:19:47.510326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.361 [2024-11-18 18:19:47.510343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:49.361 [2024-11-18 18:19:47.512843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.361 [2024-11-18 18:19:47.516643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.361 [2024-11-18 18:19:47.516794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.361 [2024-11-18 18:19:47.516795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.928 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:49.928 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:49.928 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:49.928 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:49.928 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.928 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.928 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:49.928 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.928 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.928 [2024-11-18 18:19:48.258470] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.186 [2024-11-18 18:19:48.281706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:50.186 18:19:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:50.186 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:50.187 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:50.187 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:50.187 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:50.187 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.444 18:19:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:50.444 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:50.702 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:50.963 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:50.963 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:50.963 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:50.963 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:50.963 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:50.963 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:50.963 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:50.963 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:50.963 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:50.963 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:50.963 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:50.963 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:50.963 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:51.223 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:51.481 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:51.481 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:51.481 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:51.481 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:51.481 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:51.481 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:51.481 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:51.481 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:51.739 18:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:51.739 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:51.739 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:51.739 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:51.739 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:51.739 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:51.739 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:51.739 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.739 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.739 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.739 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:51.739 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.739 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:51.739 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:51.739 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.739 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:51.739 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:51.739 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:51.739 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:51.739 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:51.739 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:51.739 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:51.998 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:51.998 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:51.998 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:51.998 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:51.998 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:51.998 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:51.998 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:51.998 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:51.998 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:51.998 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:51.998 rmmod nvme_tcp 00:12:51.998 rmmod nvme_fabrics 00:12:51.998 rmmod nvme_keyring 00:12:51.998 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:51.998 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:51.998 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:51.998 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2902004 ']' 00:12:51.998 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2902004 00:12:51.998 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2902004 ']' 00:12:51.998 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2902004 00:12:51.998 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:51.998 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.998 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2902004 00:12:52.256 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.256 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.256 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2902004' 00:12:52.256 killing process with pid 2902004 00:12:52.256 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 2902004 00:12:52.256 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2902004 00:12:53.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:53.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:53.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:53.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:53.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:53.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:53.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:53.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:53.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:53.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.722 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:55.722 00:12:55.722 real 0m8.811s 00:12:55.722 user 0m16.347s 00:12:55.722 sys 0m2.559s 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.723 
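The referrals test traced above repeatedly compares the address list reported by `rpc_cmd nvmf_discovery_get_referrals` against what `nvme discover -o json` returns, extracting `traddr` fields with jq and sorting them before a bash pattern-match comparison. A minimal sketch of that comparison pattern follows; the discovery output is stubbed here (the real test feeds live JSON from the target through `jq -r '.records[] | ... .traddr'`):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stub standing in for the jq-filtered output of `nvme discover -o json`
# (hypothetical addresses; the real list comes from the running target).
get_referral_ips() {
    printf '%s\n' 127.0.0.3 127.0.0.2 | sort | xargs
}

expected="127.0.0.2 127.0.0.3"
actual=$(get_referral_ips)

# referrals.sh compares with bash pattern matching: [[ $actual == $expected ]]
[[ "$actual" == "$expected" ]] && echo "referral IPs match"
```

The `sort | xargs` step normalizes ordering and flattens the list to one line, which is why the trace shows a `sort` invocation before every `echo`/comparison pair.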
************************************ 00:12:55.723 END TEST nvmf_referrals 00:12:55.723 ************************************ 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:55.723 ************************************ 00:12:55.723 START TEST nvmf_connect_disconnect 00:12:55.723 ************************************ 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:55.723 * Looking for test storage... 
00:12:55.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:55.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.723 --rc genhtml_branch_coverage=1 00:12:55.723 --rc genhtml_function_coverage=1 00:12:55.723 --rc genhtml_legend=1 00:12:55.723 --rc geninfo_all_blocks=1 00:12:55.723 --rc geninfo_unexecuted_blocks=1 00:12:55.723 00:12:55.723 ' 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:55.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.723 --rc genhtml_branch_coverage=1 00:12:55.723 --rc genhtml_function_coverage=1 00:12:55.723 --rc genhtml_legend=1 00:12:55.723 --rc geninfo_all_blocks=1 00:12:55.723 --rc geninfo_unexecuted_blocks=1 00:12:55.723 00:12:55.723 ' 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:55.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.723 --rc genhtml_branch_coverage=1 00:12:55.723 --rc genhtml_function_coverage=1 00:12:55.723 --rc genhtml_legend=1 00:12:55.723 --rc geninfo_all_blocks=1 00:12:55.723 --rc geninfo_unexecuted_blocks=1 00:12:55.723 00:12:55.723 ' 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:55.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.723 --rc genhtml_branch_coverage=1 00:12:55.723 --rc genhtml_function_coverage=1 00:12:55.723 --rc genhtml_legend=1 00:12:55.723 --rc geninfo_all_blocks=1 00:12:55.723 --rc geninfo_unexecuted_blocks=1 00:12:55.723 00:12:55.723 ' 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
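The `cmp_versions`/`lt 1.15 2` trace above splits both version strings on `.`, `-`, and `:` with `IFS=.-: read -ra`, then compares them field by field, padding the shorter one with zeros. A self-contained sketch of that comparison (the function name `lt` matches the trace; internals are reconstructed from the logged steps, so treat details as an approximation):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Component-wise version comparison: returns 0 (true) iff $1 < $2.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # Missing components default to 0, so "2" compares like "2.0"
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
```

This is why the harness above concludes that lcov 1.15 predates 2 and sets the legacy `--rc lcov_*` coverage options.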
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.723 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:55.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:55.724 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.628 18:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:57.628 18:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:57.628 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:57.628 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.628 18:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:57.628 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:57.628 18:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:57.628 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:57.628 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.629 18:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:57.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:12:57.629 00:12:57.629 --- 10.0.0.2 ping statistics --- 00:12:57.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.629 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:57.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:12:57.629 00:12:57.629 --- 10.0.0.1 ping statistics --- 00:12:57.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.629 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:57.629 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.887 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2904567 00:12:57.887 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:57.887 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2904567 00:12:57.887 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2904567 ']' 00:12:57.887 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.887 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.887 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.887 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.887 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.887 [2024-11-18 18:19:56.052401] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:57.887 [2024-11-18 18:19:56.052538] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.887 [2024-11-18 18:19:56.204836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.145 [2024-11-18 18:19:56.352350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:58.145 [2024-11-18 18:19:56.352441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.145 [2024-11-18 18:19:56.352468] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.145 [2024-11-18 18:19:56.352493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.145 [2024-11-18 18:19:56.352513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.145 [2024-11-18 18:19:56.355632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.145 [2024-11-18 18:19:56.355670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.145 [2024-11-18 18:19:56.355705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.145 [2024-11-18 18:19:56.355696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.078 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.078 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:59.078 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:59.078 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:59.078 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.078 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.078 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:59.078 18:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.078 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.078 [2024-11-18 18:19:57.143279] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.078 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.078 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:59.078 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.078 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.078 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.078 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:59.078 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:59.079 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.079 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.079 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.079 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:59.079 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.079 18:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.079 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.079 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.079 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.079 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.079 [2024-11-18 18:19:57.266286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.079 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.079 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:59.079 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:59.079 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:59.079 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:01.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.181 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:56.536 rmmod nvme_tcp 00:16:56.536 rmmod nvme_fabrics 00:16:56.536 rmmod nvme_keyring 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2904567 ']' 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2904567 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2904567 ']' 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2904567 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2904567 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2904567' 00:16:56.536 killing process with pid 2904567 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2904567 00:16:56.536 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2904567 00:16:57.916 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:57.916 18:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:57.916 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:57.916 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:57.916 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:57.916 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:57.916 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:57.916 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:57.916 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:57.916 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.916 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.916 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.947 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:59.948 00:16:59.948 real 4m4.376s 00:16:59.948 user 15m23.592s 00:16:59.948 sys 0m39.964s 00:16:59.948 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.948 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:59.948 ************************************ 00:16:59.948 END TEST nvmf_connect_disconnect 00:16:59.948 ************************************ 00:16:59.948 18:23:57 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:59.948 18:23:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:59.948 18:23:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.948 18:23:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:59.948 ************************************ 00:16:59.948 START TEST nvmf_multitarget 00:16:59.948 ************************************ 00:16:59.948 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:59.948 * Looking for test storage... 00:16:59.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 
-- # read -ra ver1 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:59.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.948 --rc genhtml_branch_coverage=1 00:16:59.948 --rc genhtml_function_coverage=1 00:16:59.948 --rc genhtml_legend=1 00:16:59.948 --rc geninfo_all_blocks=1 00:16:59.948 --rc 
geninfo_unexecuted_blocks=1 00:16:59.948 00:16:59.948 ' 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:59.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.948 --rc genhtml_branch_coverage=1 00:16:59.948 --rc genhtml_function_coverage=1 00:16:59.948 --rc genhtml_legend=1 00:16:59.948 --rc geninfo_all_blocks=1 00:16:59.948 --rc geninfo_unexecuted_blocks=1 00:16:59.948 00:16:59.948 ' 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:59.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.948 --rc genhtml_branch_coverage=1 00:16:59.948 --rc genhtml_function_coverage=1 00:16:59.948 --rc genhtml_legend=1 00:16:59.948 --rc geninfo_all_blocks=1 00:16:59.948 --rc geninfo_unexecuted_blocks=1 00:16:59.948 00:16:59.948 ' 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:59.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.948 --rc genhtml_branch_coverage=1 00:16:59.948 --rc genhtml_function_coverage=1 00:16:59.948 --rc genhtml_legend=1 00:16:59.948 --rc geninfo_all_blocks=1 00:16:59.948 --rc geninfo_unexecuted_blocks=1 00:16:59.948 00:16:59.948 ' 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.948 18:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.948 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.949 18:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:59.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:59.949 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@322 -- # local -ga mlx 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # 
[[ e810 == mlx5 ]] 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:01.852 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:01.852 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:01.852 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:01.853 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:01.853 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:01.853 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:01.853 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.853 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:01.853 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.853 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:01.853 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:01.853 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.853 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:01.853 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:01.853 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.853 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:01.853 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.853 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- 
# [[ tcp == tcp ]] 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:02.112 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:02.112 18:24:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:02.112 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:02.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:17:02.112 00:17:02.112 --- 10.0.0.2 ping statistics --- 00:17:02.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.112 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:02.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:02.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:17:02.371 00:17:02.371 --- 10.0.0.1 ping statistics --- 00:17:02.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.371 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2936625 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2936625 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2936625 ']' 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.371 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:02.371 [2024-11-18 18:24:00.568441] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:02.371 [2024-11-18 18:24:00.568633] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.630 [2024-11-18 18:24:00.713631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:02.630 [2024-11-18 18:24:00.838571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.630 [2024-11-18 18:24:00.838675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.630 [2024-11-18 18:24:00.838699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.630 [2024-11-18 18:24:00.838720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.630 [2024-11-18 18:24:00.838736] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:02.630 [2024-11-18 18:24:00.841352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.630 [2024-11-18 18:24:00.841420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.630 [2024-11-18 18:24:00.841464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.630 [2024-11-18 18:24:00.841471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:03.565 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.565 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:17:03.565 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:03.565 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:03.565 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:03.565 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.565 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:03.565 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:03.565 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:03.565 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:03.565 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:17:03.565 "nvmf_tgt_1" 00:17:03.565 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:03.822 "nvmf_tgt_2" 00:17:03.822 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:03.822 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:03.822 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:03.822 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:04.081 true 00:17:04.081 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:04.081 true 00:17:04.081 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:04.081 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:04.340 18:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:04.340 rmmod nvme_tcp 00:17:04.340 rmmod nvme_fabrics 00:17:04.340 rmmod nvme_keyring 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2936625 ']' 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2936625 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2936625 ']' 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2936625 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2936625 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2936625' 00:17:04.340 killing process with pid 2936625 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2936625 00:17:04.340 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2936625 00:17:05.276 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:05.276 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:05.276 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:05.276 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:05.534 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:17:05.534 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:05.535 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:17:05.535 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:05.535 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:05.535 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.535 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.535 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.440 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:07.440 
00:17:07.440 real 0m7.680s 00:17:07.440 user 0m12.242s 00:17:07.440 sys 0m2.186s 00:17:07.440 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:07.440 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:07.440 ************************************ 00:17:07.440 END TEST nvmf_multitarget 00:17:07.440 ************************************ 00:17:07.440 18:24:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:07.440 18:24:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:07.440 18:24:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:07.440 18:24:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:07.440 ************************************ 00:17:07.440 START TEST nvmf_rpc 00:17:07.440 ************************************ 00:17:07.440 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:07.440 * Looking for test storage... 
00:17:07.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:07.440 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:07.440 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:17:07.440 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:07.700 18:24:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:07.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.700 --rc genhtml_branch_coverage=1 00:17:07.700 --rc genhtml_function_coverage=1 00:17:07.700 --rc genhtml_legend=1 00:17:07.700 --rc geninfo_all_blocks=1 00:17:07.700 --rc geninfo_unexecuted_blocks=1 
00:17:07.700 00:17:07.700 ' 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:07.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.700 --rc genhtml_branch_coverage=1 00:17:07.700 --rc genhtml_function_coverage=1 00:17:07.700 --rc genhtml_legend=1 00:17:07.700 --rc geninfo_all_blocks=1 00:17:07.700 --rc geninfo_unexecuted_blocks=1 00:17:07.700 00:17:07.700 ' 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:07.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.700 --rc genhtml_branch_coverage=1 00:17:07.700 --rc genhtml_function_coverage=1 00:17:07.700 --rc genhtml_legend=1 00:17:07.700 --rc geninfo_all_blocks=1 00:17:07.700 --rc geninfo_unexecuted_blocks=1 00:17:07.700 00:17:07.700 ' 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:07.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.700 --rc genhtml_branch_coverage=1 00:17:07.700 --rc genhtml_function_coverage=1 00:17:07.700 --rc genhtml_legend=1 00:17:07.700 --rc geninfo_all_blocks=1 00:17:07.700 --rc geninfo_unexecuted_blocks=1 00:17:07.700 00:17:07.700 ' 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.700 18:24:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.700 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:07.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:07.701 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:07.701 18:24:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.604 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:09.604 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:09.604 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:09.604 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:09.604 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:09.604 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:09.604 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:09.604 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:09.604 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:09.604 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:09.604 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:09.604 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:09.604 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:09.604 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:09.604 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:09.605 
18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:17:09.605 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:09.605 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:09.605 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:09.605 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.605 18:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:09.605 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:09.864 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:09.864 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:09.864 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:09.864 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:09.864 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:09.864 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.864 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:09.864 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:09.864 
18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:09.864 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:09.864 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:09.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:09.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:17:09.864 00:17:09.864 --- 10.0.0.2 ping statistics --- 00:17:09.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.864 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:09.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:09.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:17:09.864 00:17:09.864 --- 10.0.0.1 ping statistics --- 00:17:09.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.864 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2939378 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:09.864 
18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2939378 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2939378 ']' 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.864 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.864 [2024-11-18 18:24:08.181127] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:09.864 [2024-11-18 18:24:08.181259] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.122 [2024-11-18 18:24:08.325433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:10.381 [2024-11-18 18:24:08.462341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.381 [2024-11-18 18:24:08.462418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.381 [2024-11-18 18:24:08.462443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.381 [2024-11-18 18:24:08.462467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:17:10.381 [2024-11-18 18:24:08.462486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.381 [2024-11-18 18:24:08.465357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.381 [2024-11-18 18:24:08.465438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.381 [2024-11-18 18:24:08.465533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.381 [2024-11-18 18:24:08.465538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:10.946 "tick_rate": 2700000000, 00:17:10.946 "poll_groups": [ 00:17:10.946 { 00:17:10.946 "name": "nvmf_tgt_poll_group_000", 00:17:10.946 "admin_qpairs": 0, 00:17:10.946 "io_qpairs": 0, 00:17:10.946 
"current_admin_qpairs": 0, 00:17:10.946 "current_io_qpairs": 0, 00:17:10.946 "pending_bdev_io": 0, 00:17:10.946 "completed_nvme_io": 0, 00:17:10.946 "transports": [] 00:17:10.946 }, 00:17:10.946 { 00:17:10.946 "name": "nvmf_tgt_poll_group_001", 00:17:10.946 "admin_qpairs": 0, 00:17:10.946 "io_qpairs": 0, 00:17:10.946 "current_admin_qpairs": 0, 00:17:10.946 "current_io_qpairs": 0, 00:17:10.946 "pending_bdev_io": 0, 00:17:10.946 "completed_nvme_io": 0, 00:17:10.946 "transports": [] 00:17:10.946 }, 00:17:10.946 { 00:17:10.946 "name": "nvmf_tgt_poll_group_002", 00:17:10.946 "admin_qpairs": 0, 00:17:10.946 "io_qpairs": 0, 00:17:10.946 "current_admin_qpairs": 0, 00:17:10.946 "current_io_qpairs": 0, 00:17:10.946 "pending_bdev_io": 0, 00:17:10.946 "completed_nvme_io": 0, 00:17:10.946 "transports": [] 00:17:10.946 }, 00:17:10.946 { 00:17:10.946 "name": "nvmf_tgt_poll_group_003", 00:17:10.946 "admin_qpairs": 0, 00:17:10.946 "io_qpairs": 0, 00:17:10.946 "current_admin_qpairs": 0, 00:17:10.946 "current_io_qpairs": 0, 00:17:10.946 "pending_bdev_io": 0, 00:17:10.946 "completed_nvme_io": 0, 00:17:10.946 "transports": [] 00:17:10.946 } 00:17:10.946 ] 00:17:10.946 }' 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.946 [2024-11-18 18:24:09.251496] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.946 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.203 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:11.204 "tick_rate": 2700000000, 00:17:11.204 "poll_groups": [ 00:17:11.204 { 00:17:11.204 "name": "nvmf_tgt_poll_group_000", 00:17:11.204 "admin_qpairs": 0, 00:17:11.204 "io_qpairs": 0, 00:17:11.204 "current_admin_qpairs": 0, 00:17:11.204 "current_io_qpairs": 0, 00:17:11.204 "pending_bdev_io": 0, 00:17:11.204 "completed_nvme_io": 0, 00:17:11.204 "transports": [ 00:17:11.204 { 00:17:11.204 "trtype": "TCP" 00:17:11.204 } 00:17:11.204 ] 00:17:11.204 }, 00:17:11.204 { 00:17:11.204 "name": "nvmf_tgt_poll_group_001", 00:17:11.204 "admin_qpairs": 0, 00:17:11.204 "io_qpairs": 0, 00:17:11.204 "current_admin_qpairs": 0, 00:17:11.204 "current_io_qpairs": 0, 00:17:11.204 "pending_bdev_io": 0, 00:17:11.204 "completed_nvme_io": 0, 00:17:11.204 "transports": [ 00:17:11.204 { 00:17:11.204 "trtype": "TCP" 00:17:11.204 } 00:17:11.204 ] 00:17:11.204 }, 00:17:11.204 { 00:17:11.204 "name": "nvmf_tgt_poll_group_002", 00:17:11.204 "admin_qpairs": 0, 00:17:11.204 "io_qpairs": 0, 00:17:11.204 
"current_admin_qpairs": 0, 00:17:11.204 "current_io_qpairs": 0, 00:17:11.204 "pending_bdev_io": 0, 00:17:11.204 "completed_nvme_io": 0, 00:17:11.204 "transports": [ 00:17:11.204 { 00:17:11.204 "trtype": "TCP" 00:17:11.204 } 00:17:11.204 ] 00:17:11.204 }, 00:17:11.204 { 00:17:11.204 "name": "nvmf_tgt_poll_group_003", 00:17:11.204 "admin_qpairs": 0, 00:17:11.204 "io_qpairs": 0, 00:17:11.204 "current_admin_qpairs": 0, 00:17:11.204 "current_io_qpairs": 0, 00:17:11.204 "pending_bdev_io": 0, 00:17:11.204 "completed_nvme_io": 0, 00:17:11.204 "transports": [ 00:17:11.204 { 00:17:11.204 "trtype": "TCP" 00:17:11.204 } 00:17:11.204 ] 00:17:11.204 } 00:17:11.204 ] 00:17:11.204 }' 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.204 Malloc1 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.204 [2024-11-18 18:24:09.466652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.204 
18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:11.204 [2024-11-18 18:24:09.489937] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:17:11.204 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:11.204 could not add new controller: failed to write to nvme-fabrics device 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.204 18:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.204 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:12.136 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:12.136 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:12.136 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:12.136 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:12.136 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:14.034 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:14.034 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:14.034 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:14.034 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:14.034 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:14.034 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:14.034 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:14.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.034 18:24:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:14.034 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:14.034 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:14.034 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:14.034 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:14.034 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:14.034 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:14.034 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.034 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.034 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:14.292 [2024-11-18 18:24:12.388825] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:17:14.292 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:14.292 could not add new controller: failed to write to nvme-fabrics device 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:14.292 18:24:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.292 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:14.858 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:14.858 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:14.858 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:14.858 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:14.858 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:17.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:17.384 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:17:17.385 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.385 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.385 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.385 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:17.385 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.385 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.385 [2024-11-18 18:24:15.423876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.385 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.385 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:17.385 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.385 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.385 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.385 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:17.385 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.385 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.385 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.385 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:17.950 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:17.950 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:17.950 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:17.950 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:17.950 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:19.849 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:19.849 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:19.849 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:19.849 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:19.849 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:19.849 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:19.849 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:20.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.107 18:24:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.107 [2024-11-18 18:24:18.307136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.107 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:21.044 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:21.044 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:21.044 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:21.044 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:21.044 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:22.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.941 [2024-11-18 18:24:21.190691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.941 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:23.507 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:23.507 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:23.507 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:17:23.507 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:23.507 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:26.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.034 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.034 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.034 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:26.034 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:26.034 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.034 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.034 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.034 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:26.034 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.034 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.034 [2024-11-18 18:24:24.016004] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.034 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.034 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:26.034 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.034 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.034 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.034 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:26.034 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.034 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.034 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.034 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:26.600 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:26.600 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:26.600 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:26.600 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:26.600 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:17:28.498 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:28.498 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:28.498 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:28.498 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:28.498 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:28.498 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:28.498 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:28.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.756 [2024-11-18 18:24:26.905652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.756 18:24:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.756 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:29.322 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:29.322 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:29.322 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:29.322 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:29.322 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:31.218 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:31.218 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:17:31.218 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:31.218 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:31.218 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:31.218 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:31.218 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:31.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.477 [2024-11-18 18:24:29.751105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.477 [2024-11-18 18:24:29.799148] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.477 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.736 
18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.736 [2024-11-18 18:24:29.847311] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:31.736 
18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.736 [2024-11-18 18:24:29.895427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.736 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.737 [2024-11-18 
18:24:29.943640] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.737 
18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:31.737 "tick_rate": 2700000000, 00:17:31.737 "poll_groups": [ 00:17:31.737 { 00:17:31.737 "name": "nvmf_tgt_poll_group_000", 00:17:31.737 "admin_qpairs": 2, 00:17:31.737 "io_qpairs": 84, 00:17:31.737 "current_admin_qpairs": 0, 00:17:31.737 "current_io_qpairs": 0, 00:17:31.737 "pending_bdev_io": 0, 00:17:31.737 "completed_nvme_io": 185, 00:17:31.737 "transports": [ 00:17:31.737 { 00:17:31.737 "trtype": "TCP" 00:17:31.737 } 00:17:31.737 ] 00:17:31.737 }, 00:17:31.737 { 00:17:31.737 "name": "nvmf_tgt_poll_group_001", 00:17:31.737 "admin_qpairs": 2, 00:17:31.737 "io_qpairs": 84, 00:17:31.737 "current_admin_qpairs": 0, 00:17:31.737 "current_io_qpairs": 0, 00:17:31.737 "pending_bdev_io": 0, 00:17:31.737 "completed_nvme_io": 84, 00:17:31.737 "transports": [ 00:17:31.737 { 00:17:31.737 "trtype": "TCP" 00:17:31.737 } 00:17:31.737 ] 00:17:31.737 }, 00:17:31.737 { 00:17:31.737 "name": "nvmf_tgt_poll_group_002", 00:17:31.737 "admin_qpairs": 1, 00:17:31.737 "io_qpairs": 84, 00:17:31.737 "current_admin_qpairs": 0, 00:17:31.737 "current_io_qpairs": 0, 00:17:31.737 "pending_bdev_io": 0, 00:17:31.737 "completed_nvme_io": 184, 00:17:31.737 "transports": [ 00:17:31.737 { 00:17:31.737 "trtype": "TCP" 00:17:31.737 } 00:17:31.737 ] 00:17:31.737 }, 00:17:31.737 { 00:17:31.737 "name": "nvmf_tgt_poll_group_003", 00:17:31.737 "admin_qpairs": 2, 00:17:31.737 "io_qpairs": 84, 
00:17:31.737 "current_admin_qpairs": 0, 00:17:31.737 "current_io_qpairs": 0, 00:17:31.737 "pending_bdev_io": 0, 00:17:31.737 "completed_nvme_io": 233, 00:17:31.737 "transports": [ 00:17:31.737 { 00:17:31.737 "trtype": "TCP" 00:17:31.737 } 00:17:31.737 ] 00:17:31.737 } 00:17:31.737 ] 00:17:31.737 }' 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:31.737 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:31.737 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:31.737 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:31.737 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:31.737 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:31.737 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:31.995 rmmod nvme_tcp 00:17:31.995 rmmod nvme_fabrics 00:17:31.995 rmmod nvme_keyring 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2939378 ']' 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2939378 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2939378 ']' 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2939378 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2939378 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2939378' 00:17:31.995 killing process with pid 2939378 00:17:31.995 18:24:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2939378 00:17:31.995 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2939378 00:17:33.369 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:33.369 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:33.369 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:33.369 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:33.369 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:33.369 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:33.369 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:33.369 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:33.369 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:33.369 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.369 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:33.369 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.326 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:35.326 00:17:35.326 real 0m27.849s 00:17:35.326 user 1m29.383s 00:17:35.326 sys 0m4.731s 00:17:35.326 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.326 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.326 ************************************ 00:17:35.326 END TEST 
nvmf_rpc 00:17:35.326 ************************************ 00:17:35.326 18:24:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:35.326 18:24:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:35.326 18:24:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.326 18:24:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:35.326 ************************************ 00:17:35.326 START TEST nvmf_invalid 00:17:35.326 ************************************ 00:17:35.326 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:35.326 * Looking for test storage... 00:17:35.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:35.326 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:35.326 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:35.326 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:35.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.585 --rc genhtml_branch_coverage=1 00:17:35.585 --rc genhtml_function_coverage=1 00:17:35.585 --rc genhtml_legend=1 00:17:35.585 --rc geninfo_all_blocks=1 00:17:35.585 --rc geninfo_unexecuted_blocks=1 00:17:35.585 00:17:35.585 ' 
00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:35.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.585 --rc genhtml_branch_coverage=1 00:17:35.585 --rc genhtml_function_coverage=1 00:17:35.585 --rc genhtml_legend=1 00:17:35.585 --rc geninfo_all_blocks=1 00:17:35.585 --rc geninfo_unexecuted_blocks=1 00:17:35.585 00:17:35.585 ' 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:35.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.585 --rc genhtml_branch_coverage=1 00:17:35.585 --rc genhtml_function_coverage=1 00:17:35.585 --rc genhtml_legend=1 00:17:35.585 --rc geninfo_all_blocks=1 00:17:35.585 --rc geninfo_unexecuted_blocks=1 00:17:35.585 00:17:35.585 ' 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:35.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.585 --rc genhtml_branch_coverage=1 00:17:35.585 --rc genhtml_function_coverage=1 00:17:35.585 --rc genhtml_legend=1 00:17:35.585 --rc geninfo_all_blocks=1 00:17:35.585 --rc geninfo_unexecuted_blocks=1 00:17:35.585 00:17:35.585 ' 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.585 18:24:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.585 
18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.585 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.586 18:24:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:35.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:35.586 18:24:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:35.586 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:37.487 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:37.487 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:37.487 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:37.487 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:37.487 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:37.487 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:37.487 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:37.488 18:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.488 18:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:37.488 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:37.488 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:37.488 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:37.488 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:37.488 18:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:37.488 18:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:37.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:37.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:17:37.488 00:17:37.488 --- 10.0.0.2 ping statistics --- 00:17:37.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.488 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:37.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:37.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:17:37.488 00:17:37.488 --- 10.0.0.1 ping statistics --- 00:17:37.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.488 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:37.488 18:24:35 
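The two pings above are what lets `nvmf_tcp_init` fall through to `return 0`: each direction must answer before the test proceeds. A minimal sketch of checking a ping summary line of the shape logged here (the sample text and variable names are illustrative, not captured from this run):

```shell
# Pull transmitted/received/loss out of a ping statistics line like the
# one in the log. awk fields include trailing punctuation, hence the tr.
sample='1 packets transmitted, 1 received, 0% packet loss, time 0ms'
tx=$(echo "$sample" | awk '{print $1}')
rx=$(echo "$sample" | awk '{print $4}' | tr -d ',')
loss=$(echo "$sample" | awk '{print $6}' | tr -d '%')
```

A real harness would run `ping -c 1` and fail fast when `rx` is 0 or `loss` is nonzero.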
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:37.488 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:37.489 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:37.489 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:37.489 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:37.489 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:37.489 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2944326 00:17:37.489 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:37.489 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2944326 00:17:37.489 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2944326 ']' 00:17:37.489 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.489 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.489 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:37.489 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.489 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:37.747 [2024-11-18 18:24:35.900012] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:37.747 [2024-11-18 18:24:35.900154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.747 [2024-11-18 18:24:36.053036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:38.005 [2024-11-18 18:24:36.198807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.005 [2024-11-18 18:24:36.198901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.005 [2024-11-18 18:24:36.198928] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.005 [2024-11-18 18:24:36.198955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.005 [2024-11-18 18:24:36.198975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
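`waitforlisten` above blocks until `nvmf_tgt` is up on `/var/tmp/spdk.sock` before the RPC calls start. A simplified, hedged sketch of that polling pattern (existence check only, no actual RPC probe; `wait_for_path` is my name, not SPDK's helper):

```shell
# Poll until a path (e.g. an app's UNIX-domain socket) appears, with a
# bounded number of retries, mirroring the wait-then-RPC startup pattern.
wait_for_path() {
  local path=$1 retries=${2:-10}
  while (( retries-- > 0 )); do
    [ -e "$path" ] && return 0
    sleep 0.1
  done
  return 1
}
```

SPDK's real `waitforlisten` additionally issues an RPC against the socket to confirm the target is answering, not merely that the socket file exists.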
00:17:38.005 [2024-11-18 18:24:36.201884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.005 [2024-11-18 18:24:36.201949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.005 [2024-11-18 18:24:36.202002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.005 [2024-11-18 18:24:36.202019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:38.570 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.570 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:38.570 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:38.571 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:38.571 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:38.571 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.571 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:38.571 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7400 00:17:38.828 [2024-11-18 18:24:37.141068] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:38.828 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:38.828 { 00:17:38.828 "nqn": "nqn.2016-06.io.spdk:cnode7400", 00:17:38.828 "tgt_name": "foobar", 00:17:38.828 "method": "nvmf_create_subsystem", 00:17:38.828 "req_id": 1 00:17:38.828 } 00:17:38.828 Got JSON-RPC error 
response 00:17:38.828 response: 00:17:38.828 { 00:17:38.828 "code": -32603, 00:17:38.828 "message": "Unable to find target foobar" 00:17:38.828 }' 00:17:38.828 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:38.828 { 00:17:38.828 "nqn": "nqn.2016-06.io.spdk:cnode7400", 00:17:38.828 "tgt_name": "foobar", 00:17:38.828 "method": "nvmf_create_subsystem", 00:17:38.828 "req_id": 1 00:17:38.828 } 00:17:38.828 Got JSON-RPC error response 00:17:38.828 response: 00:17:38.828 { 00:17:38.828 "code": -32603, 00:17:38.828 "message": "Unable to find target foobar" 00:17:38.828 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:38.828 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:38.828 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25625 00:17:39.087 [2024-11-18 18:24:37.397959] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25625: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:39.087 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:39.087 { 00:17:39.087 "nqn": "nqn.2016-06.io.spdk:cnode25625", 00:17:39.087 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:39.087 "method": "nvmf_create_subsystem", 00:17:39.087 "req_id": 1 00:17:39.087 } 00:17:39.087 Got JSON-RPC error response 00:17:39.087 response: 00:17:39.087 { 00:17:39.087 "code": -32602, 00:17:39.087 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:39.087 }' 00:17:39.087 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:39.087 { 00:17:39.087 "nqn": "nqn.2016-06.io.spdk:cnode25625", 00:17:39.087 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:39.087 "method": "nvmf_create_subsystem", 00:17:39.087 
"req_id": 1 00:17:39.087 } 00:17:39.087 Got JSON-RPC error response 00:17:39.087 response: 00:17:39.087 { 00:17:39.087 "code": -32602, 00:17:39.087 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:39.087 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:39.087 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:39.087 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7779 00:17:39.345 [2024-11-18 18:24:37.666841] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7779: invalid model number 'SPDK_Controller' 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:39.603 { 00:17:39.603 "nqn": "nqn.2016-06.io.spdk:cnode7779", 00:17:39.603 "model_number": "SPDK_Controller\u001f", 00:17:39.603 "method": "nvmf_create_subsystem", 00:17:39.603 "req_id": 1 00:17:39.603 } 00:17:39.603 Got JSON-RPC error response 00:17:39.603 response: 00:17:39.603 { 00:17:39.603 "code": -32602, 00:17:39.603 "message": "Invalid MN SPDK_Controller\u001f" 00:17:39.603 }' 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:39.603 { 00:17:39.603 "nqn": "nqn.2016-06.io.spdk:cnode7779", 00:17:39.603 "model_number": "SPDK_Controller\u001f", 00:17:39.603 "method": "nvmf_create_subsystem", 00:17:39.603 "req_id": 1 00:17:39.603 } 00:17:39.603 Got JSON-RPC error response 00:17:39.603 response: 00:17:39.603 { 00:17:39.603 "code": -32602, 00:17:39.603 "message": "Invalid MN SPDK_Controller\u001f" 00:17:39.603 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
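The `[[ $out == *\I\n\v\a\l\i\d\ \S\N* ]]`-style checks above assert on the JSON-RPC error text by glob match rather than parsing the JSON. The same idea, condensed (the response body below is copied in shape from the log, not re-captured):

```shell
# invalid.sh captures the rpc.py error output in $out, then glob-matches
# the expected message substring; unexpected errors fail the test.
out='{ "code": -32602, "message": "Invalid MN SPDK_Controller" }'
if [[ "$out" == *"Invalid MN"* ]]; then
  matched=yes
else
  matched=no
fi
```

Matching on the message substring keeps the test insensitive to field ordering and to the duplicated `response:` block the log shows.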
00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.603 18:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:39.603 18:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:39.604 18:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:39.604 18:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.604 18:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.604 18:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ p == \- ]] 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'p_zb,)+{E2\$:).8kX*#j' 00:17:39.604 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'p_zb,)+{E2\$:).8kX*#j' nqn.2016-06.io.spdk:cnode24152 00:17:39.863 [2024-11-18 18:24:38.024103] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24152: invalid serial number 'p_zb,)+{E2\$:).8kX*#j' 00:17:39.863 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:39.863 { 00:17:39.863 "nqn": "nqn.2016-06.io.spdk:cnode24152", 00:17:39.863 "serial_number": "p_zb,)+{E2\\$:).8kX*#j", 00:17:39.863 "method": "nvmf_create_subsystem", 00:17:39.863 "req_id": 1 00:17:39.863 } 00:17:39.863 Got JSON-RPC error response 00:17:39.863 response: 00:17:39.863 { 00:17:39.863 "code": -32602, 00:17:39.863 "message": "Invalid SN p_zb,)+{E2\\$:).8kX*#j" 00:17:39.863 }' 00:17:39.863 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:39.863 { 00:17:39.863 "nqn": "nqn.2016-06.io.spdk:cnode24152", 00:17:39.863 "serial_number": "p_zb,)+{E2\\$:).8kX*#j", 00:17:39.863 "method": "nvmf_create_subsystem", 00:17:39.863 "req_id": 1 00:17:39.863 } 00:17:39.863 Got JSON-RPC error response 00:17:39.863 response: 00:17:39.863 { 00:17:39.863 "code": -32602, 00:17:39.863 "message": "Invalid SN p_zb,)+{E2\\$:).8kX*#j" 00:17:39.863 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:39.863 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:39.863 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:39.863 18:24:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:39.863 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:39.863 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:39.863 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:39.863 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.863 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:39.863 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:39.863 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:39.863 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.863 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.863 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:39.863 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:39.863 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:39.863 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.863 18:24:38 
[trace condensed: target/invalid.sh@24-25 iterates ll over 41 random byte values, each appended via `printf %x N` -> `echo -e '\xNN'` -> `string+=<char>`, building the invalid model number echoed in full at 00:17:40.123 below (timestamps 00:17:39.863-00:17:40.123, 18:24:38).]
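The per-character trace above can be sketched as a self-contained snippet. This is an illustration of the technique the trace shows (printf %x, then echo -e '\xNN', then string+=), not the actual target/invalid.sh source; the real helper evidently also emits 0x20 (the generated model number contains a space), which this sketch skips because command substitution would strip a trailing space.

```shell
#!/usr/bin/env bash
# Sketch of the loop traced above: build a random string one character
# at a time by printing each byte value as hex and decoding it back.
length=41
string=
for (( ll = 0; ll < length; ll++ )); do
    code=$(( RANDOM % 94 + 33 ))      # printable ASCII, space excluded here
    hex=$(printf '%x' "$code")        # e.g. 56 -> "38"
    string+=$(echo -e "\x${hex}")     # decode the hex byte to a character
done
echo "${#string}"
```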
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.123 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:40.123 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:40.123 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:40.123 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.123 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.123 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ h == \- ]] 00:17:40.123 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'hD8UpJgQZyN4gE{[bW,yR [{,8Gr$?(R|(Kc;T;{Y' 00:17:40.123 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'hD8UpJgQZyN4gE{[bW,yR [{,8Gr$?(R|(Kc;T;{Y' nqn.2016-06.io.spdk:cnode19087 00:17:40.123 [2024-11-18 18:24:38.453489] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19087: invalid model number 'hD8UpJgQZyN4gE{[bW,yR [{,8Gr$?(R|(Kc;T;{Y' 00:17:40.380 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:40.380 { 00:17:40.380 "nqn": "nqn.2016-06.io.spdk:cnode19087", 00:17:40.380 "model_number": "hD8UpJgQZyN4gE{[bW,yR [{,8Gr$?(R|(Kc;T;{Y", 00:17:40.380 "method": "nvmf_create_subsystem", 00:17:40.380 "req_id": 1 00:17:40.380 } 00:17:40.380 Got JSON-RPC error response 00:17:40.380 response: 00:17:40.380 { 00:17:40.380 "code": -32602, 00:17:40.380 "message": "Invalid MN hD8UpJgQZyN4gE{[bW,yR [{,8Gr$?(R|(Kc;T;{Y" 00:17:40.380 }' 00:17:40.380 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:40.380 { 00:17:40.380 "nqn": 
"nqn.2016-06.io.spdk:cnode19087", 00:17:40.380 "model_number": "hD8UpJgQZyN4gE{[bW,yR [{,8Gr$?(R|(Kc;T;{Y", 00:17:40.380 "method": "nvmf_create_subsystem", 00:17:40.380 "req_id": 1 00:17:40.380 } 00:17:40.380 Got JSON-RPC error response 00:17:40.380 response: 00:17:40.380 { 00:17:40.380 "code": -32602, 00:17:40.380 "message": "Invalid MN hD8UpJgQZyN4gE{[bW,yR [{,8Gr$?(R|(Kc;T;{Y" 00:17:40.380 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:40.380 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:40.638 [2024-11-18 18:24:38.718562] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.638 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:40.896 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:40.896 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:40.896 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:40.896 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:40.896 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:41.152 [2024-11-18 18:24:39.289628] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:41.152 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:41.152 { 00:17:41.152 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:41.152 "listen_address": { 00:17:41.152 "trtype": "tcp", 00:17:41.152 "traddr": "", 00:17:41.152 "trsvcid": "4421" 
00:17:41.152 }, 00:17:41.152 "method": "nvmf_subsystem_remove_listener", 00:17:41.152 "req_id": 1 00:17:41.152 } 00:17:41.152 Got JSON-RPC error response 00:17:41.152 response: 00:17:41.152 { 00:17:41.152 "code": -32602, 00:17:41.152 "message": "Invalid parameters" 00:17:41.152 }' 00:17:41.152 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:41.152 { 00:17:41.152 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:41.152 "listen_address": { 00:17:41.152 "trtype": "tcp", 00:17:41.152 "traddr": "", 00:17:41.152 "trsvcid": "4421" 00:17:41.152 }, 00:17:41.152 "method": "nvmf_subsystem_remove_listener", 00:17:41.152 "req_id": 1 00:17:41.152 } 00:17:41.152 Got JSON-RPC error response 00:17:41.152 response: 00:17:41.152 { 00:17:41.152 "code": -32602, 00:17:41.152 "message": "Invalid parameters" 00:17:41.152 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:41.152 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8706 -i 0 00:17:41.408 [2024-11-18 18:24:39.554459] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8706: invalid cntlid range [0-65519] 00:17:41.408 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:41.408 { 00:17:41.408 "nqn": "nqn.2016-06.io.spdk:cnode8706", 00:17:41.409 "min_cntlid": 0, 00:17:41.409 "method": "nvmf_create_subsystem", 00:17:41.409 "req_id": 1 00:17:41.409 } 00:17:41.409 Got JSON-RPC error response 00:17:41.409 response: 00:17:41.409 { 00:17:41.409 "code": -32602, 00:17:41.409 "message": "Invalid cntlid range [0-65519]" 00:17:41.409 }' 00:17:41.409 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:41.409 { 00:17:41.409 "nqn": "nqn.2016-06.io.spdk:cnode8706", 00:17:41.409 "min_cntlid": 0, 00:17:41.409 "method": 
"nvmf_create_subsystem", 00:17:41.409 "req_id": 1 00:17:41.409 } 00:17:41.409 Got JSON-RPC error response 00:17:41.409 response: 00:17:41.409 { 00:17:41.409 "code": -32602, 00:17:41.409 "message": "Invalid cntlid range [0-65519]" 00:17:41.409 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:41.409 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19343 -i 65520 00:17:41.666 [2024-11-18 18:24:39.823344] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19343: invalid cntlid range [65520-65519] 00:17:41.666 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:41.666 { 00:17:41.666 "nqn": "nqn.2016-06.io.spdk:cnode19343", 00:17:41.666 "min_cntlid": 65520, 00:17:41.666 "method": "nvmf_create_subsystem", 00:17:41.666 "req_id": 1 00:17:41.666 } 00:17:41.666 Got JSON-RPC error response 00:17:41.666 response: 00:17:41.666 { 00:17:41.666 "code": -32602, 00:17:41.666 "message": "Invalid cntlid range [65520-65519]" 00:17:41.666 }' 00:17:41.666 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:41.666 { 00:17:41.666 "nqn": "nqn.2016-06.io.spdk:cnode19343", 00:17:41.666 "min_cntlid": 65520, 00:17:41.666 "method": "nvmf_create_subsystem", 00:17:41.666 "req_id": 1 00:17:41.666 } 00:17:41.666 Got JSON-RPC error response 00:17:41.666 response: 00:17:41.666 { 00:17:41.666 "code": -32602, 00:17:41.666 "message": "Invalid cntlid range [65520-65519]" 00:17:41.666 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:41.666 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1046 -I 0 00:17:41.924 [2024-11-18 18:24:40.120546] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode1046: invalid cntlid range [1-0] 00:17:41.924 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:41.924 { 00:17:41.924 "nqn": "nqn.2016-06.io.spdk:cnode1046", 00:17:41.924 "max_cntlid": 0, 00:17:41.924 "method": "nvmf_create_subsystem", 00:17:41.924 "req_id": 1 00:17:41.924 } 00:17:41.924 Got JSON-RPC error response 00:17:41.924 response: 00:17:41.924 { 00:17:41.924 "code": -32602, 00:17:41.924 "message": "Invalid cntlid range [1-0]" 00:17:41.924 }' 00:17:41.924 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:41.924 { 00:17:41.924 "nqn": "nqn.2016-06.io.spdk:cnode1046", 00:17:41.924 "max_cntlid": 0, 00:17:41.924 "method": "nvmf_create_subsystem", 00:17:41.924 "req_id": 1 00:17:41.924 } 00:17:41.924 Got JSON-RPC error response 00:17:41.924 response: 00:17:41.924 { 00:17:41.924 "code": -32602, 00:17:41.924 "message": "Invalid cntlid range [1-0]" 00:17:41.924 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:41.924 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25330 -I 65520 00:17:42.184 [2024-11-18 18:24:40.397400] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25330: invalid cntlid range [1-65520] 00:17:42.184 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:42.184 { 00:17:42.184 "nqn": "nqn.2016-06.io.spdk:cnode25330", 00:17:42.184 "max_cntlid": 65520, 00:17:42.184 "method": "nvmf_create_subsystem", 00:17:42.184 "req_id": 1 00:17:42.184 } 00:17:42.184 Got JSON-RPC error response 00:17:42.184 response: 00:17:42.184 { 00:17:42.184 "code": -32602, 00:17:42.184 "message": "Invalid cntlid range [1-65520]" 00:17:42.184 }' 00:17:42.184 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 
-- # [[ request: 00:17:42.184 { 00:17:42.184 "nqn": "nqn.2016-06.io.spdk:cnode25330", 00:17:42.184 "max_cntlid": 65520, 00:17:42.184 "method": "nvmf_create_subsystem", 00:17:42.184 "req_id": 1 00:17:42.184 } 00:17:42.184 Got JSON-RPC error response 00:17:42.184 response: 00:17:42.184 { 00:17:42.184 "code": -32602, 00:17:42.184 "message": "Invalid cntlid range [1-65520]" 00:17:42.184 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:42.184 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6188 -i 6 -I 5 00:17:42.441 [2024-11-18 18:24:40.674367] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6188: invalid cntlid range [6-5] 00:17:42.441 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:42.441 { 00:17:42.441 "nqn": "nqn.2016-06.io.spdk:cnode6188", 00:17:42.441 "min_cntlid": 6, 00:17:42.441 "max_cntlid": 5, 00:17:42.441 "method": "nvmf_create_subsystem", 00:17:42.441 "req_id": 1 00:17:42.441 } 00:17:42.441 Got JSON-RPC error response 00:17:42.441 response: 00:17:42.441 { 00:17:42.441 "code": -32602, 00:17:42.441 "message": "Invalid cntlid range [6-5]" 00:17:42.441 }' 00:17:42.441 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:42.441 { 00:17:42.441 "nqn": "nqn.2016-06.io.spdk:cnode6188", 00:17:42.441 "min_cntlid": 6, 00:17:42.441 "max_cntlid": 5, 00:17:42.441 "method": "nvmf_create_subsystem", 00:17:42.441 "req_id": 1 00:17:42.441 } 00:17:42.441 Got JSON-RPC error response 00:17:42.441 response: 00:17:42.441 { 00:17:42.441 "code": -32602, 00:17:42.441 "message": "Invalid cntlid range [6-5]" 00:17:42.441 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:42.441 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:42.699 { 00:17:42.699 "name": "foobar", 00:17:42.699 "method": "nvmf_delete_target", 00:17:42.699 "req_id": 1 00:17:42.699 } 00:17:42.699 Got JSON-RPC error response 00:17:42.699 response: 00:17:42.699 { 00:17:42.699 "code": -32602, 00:17:42.699 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:42.699 }' 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:42.699 { 00:17:42.699 "name": "foobar", 00:17:42.699 "method": "nvmf_delete_target", 00:17:42.699 "req_id": 1 00:17:42.699 } 00:17:42.699 Got JSON-RPC error response 00:17:42.699 response: 00:17:42.699 { 00:17:42.699 "code": -32602, 00:17:42.699 "message": "The specified target doesn't exist, cannot delete it." 00:17:42.699 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:42.699 rmmod nvme_tcp 00:17:42.699 
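The teardown below kills the nvmf target through a killprocess helper. A minimal sketch of its liveness probe, inferred only from the traced commands (`kill -0`, `ps --no-headers -o comm=`) and using this shell's own PID so the snippet is self-contained:

```shell
# kill -0 sends no signal; it only checks that the PID exists and is
# signalable. $$ (this shell) stands in for the nvmf target PID here.
pid=$$
if kill -0 "$pid" 2>/dev/null; then
    alive=yes
    comm=$(ps --no-headers -o comm= -p "$pid")   # process name, as in the trace
else
    alive=no
    comm=
fi
echo "$alive $comm"
```

The real helper additionally refuses to kill a process whose command name is `sudo` (the `'[' reactor_0 = sudo ']'` check in the trace) before issuing the actual kill and wait.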
rmmod nvme_fabrics 00:17:42.699 rmmod nvme_keyring 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2944326 ']' 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2944326 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2944326 ']' 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2944326 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2944326 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2944326' 00:17:42.699 killing process with pid 2944326 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2944326 00:17:42.699 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2944326 00:17:44.074 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:44.074 18:24:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:44.074 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:44.074 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:44.074 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:44.074 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:44.074 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:17:44.074 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:44.074 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:44.074 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.074 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.074 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.975 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:45.975 00:17:45.975 real 0m10.455s 00:17:45.975 user 0m26.592s 00:17:45.975 sys 0m2.613s 00:17:45.975 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:45.975 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:45.975 ************************************ 00:17:45.975 END TEST nvmf_invalid 00:17:45.975 ************************************ 00:17:45.975 18:24:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:17:45.975 18:24:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:45.976 ************************************ 00:17:45.976 START TEST nvmf_connect_stress 00:17:45.976 ************************************ 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:45.976 * Looking for test storage... 00:17:45.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 
00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:45.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.976 --rc genhtml_branch_coverage=1 00:17:45.976 --rc genhtml_function_coverage=1 00:17:45.976 --rc genhtml_legend=1 00:17:45.976 --rc 
geninfo_all_blocks=1 00:17:45.976 --rc geninfo_unexecuted_blocks=1 00:17:45.976 00:17:45.976 ' 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:45.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.976 --rc genhtml_branch_coverage=1 00:17:45.976 --rc genhtml_function_coverage=1 00:17:45.976 --rc genhtml_legend=1 00:17:45.976 --rc geninfo_all_blocks=1 00:17:45.976 --rc geninfo_unexecuted_blocks=1 00:17:45.976 00:17:45.976 ' 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:45.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.976 --rc genhtml_branch_coverage=1 00:17:45.976 --rc genhtml_function_coverage=1 00:17:45.976 --rc genhtml_legend=1 00:17:45.976 --rc geninfo_all_blocks=1 00:17:45.976 --rc geninfo_unexecuted_blocks=1 00:17:45.976 00:17:45.976 ' 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:45.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.976 --rc genhtml_branch_coverage=1 00:17:45.976 --rc genhtml_function_coverage=1 00:17:45.976 --rc genhtml_legend=1 00:17:45.976 --rc geninfo_all_blocks=1 00:17:45.976 --rc geninfo_unexecuted_blocks=1 00:17:45.976 00:17:45.976 ' 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:45.976 
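The trace above steps through `scripts/common.sh`'s `lt`/`cmp_versions` helpers to compare the installed lcov version (1.15) against 2: both versions are split on `.`, padded component-wise, and compared numerically. A standalone sketch of that pattern (the function name `ver_lt` is illustrative, not the actual SPDK helper):

```shell
#!/usr/bin/env bash
# Sketch of a dot-separated version comparison like the cmp_versions
# trace above: split both versions on '.', treat missing components as
# zero, and compare numerically left to right. ver_lt is a hypothetical
# name, not the SPDK helper itself.
ver_lt() {
  local -a ver1 ver2
  IFS=. read -ra ver1 <<< "$1"
  IFS=. read -ra ver2 <<< "$2"
  local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
ver_lt 2.0 1.15 || echo "2.0 >= 1.15"
```

This mirrors why the trace takes the `ver1[v] < ver2[v]` branch at the first component (1 < 2) and selects the lcov 1.x coverage options.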
18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:45.976 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:45.977 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.977 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:17:45.977 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.977 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:45.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:45.977 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:45.977 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:45.977 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:45.977 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:45.977 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:45.977 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.977 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:45.977 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:45.977 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:45.977 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.977 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:45.977 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.977 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:45.977 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
-- # gather_supported_nvmf_pci_devs 00:17:45.977 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:45.977 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:48.508 18:24:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:48.508 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:48.509 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:48.509 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.509 18:24:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:48.509 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:48.509 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:48.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:48.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:17:48.509 00:17:48.509 --- 10.0.0.2 ping statistics --- 00:17:48.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.509 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:48.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:48.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:17:48.509 00:17:48.509 --- 10.0.0.1 ping statistics --- 00:17:48.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.509 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2947099 00:17:48.509 18:24:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2947099 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2947099 ']' 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.509 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.510 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.510 [2024-11-18 18:24:46.599962] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:48.510 [2024-11-18 18:24:46.600110] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.510 [2024-11-18 18:24:46.743200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:48.768 [2024-11-18 18:24:46.879374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
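Here the harness starts `nvmf_tgt` inside the `cvl_0_0_ns_spdk` namespace and then blocks in `waitforlisten` until the target's RPC socket at `/var/tmp/spdk.sock` appears. A minimal sketch of that readiness-polling pattern, using a plain file as a stand-in for the real UNIX socket (the path, retry count, and 100 ms interval are illustrative, not SPDK's actual values):

```shell
#!/usr/bin/env bash
# Sketch of a waitforlisten-style loop: poll for the target's RPC
# socket (simulated by a temp file) until it exists or a retry budget
# is exhausted. The background subshell stands in for nvmf_tgt.
rpc_sock=$(mktemp -u)
( sleep 1; touch "$rpc_sock" ) &   # pretend target creates its socket after ~1s

max_retries=50
for (( i = 0; i < max_retries; i++ )); do
  if [ -e "$rpc_sock" ]; then
    echo "target ready after ~$(( i * 100 ))ms"
    break
  fi
  sleep 0.1
done
wait                                # reap the background "target"
[ -e "$rpc_sock" ] || echo "timed out waiting for $rpc_sock" >&2
rm -f "$rpc_sock"
```

The real helper additionally verifies the PID is still alive on each iteration, so a crashed target fails fast instead of burning the whole timeout.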
00:17:48.768 [2024-11-18 18:24:46.879454] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.768 [2024-11-18 18:24:46.879480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.768 [2024-11-18 18:24:46.879504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.768 [2024-11-18 18:24:46.879524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:48.768 [2024-11-18 18:24:46.882267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.768 [2024-11-18 18:24:46.882357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.768 [2024-11-18 18:24:46.882362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:49.334 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.334 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:49.334 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:49.334 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:49.334 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.334 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.334 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:49.334 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.334 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:17:49.334 [2024-11-18 18:24:47.647319] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.334 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.334 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:49.334 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.334 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.334 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.334 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:49.334 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.334 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.334 [2024-11-18 18:24:47.667719] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.593 NULL1 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2947252 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.593 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.594 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.594 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.594 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.594 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.594 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.594 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.594 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.594 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.594 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.594 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.594 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.594 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.594 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:49.594 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.594 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.594 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.851 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.852 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:49.852 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.852 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.852 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.110 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.110 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:50.110 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.110 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.110 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.675 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.675 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:50.675 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.675 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.675 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.932 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.932 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:50.932 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.932 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.932 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.190 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.190 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:51.190 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.190 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.190 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.447 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.447 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:51.447 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.447 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.447 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.704 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.704 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:51.704 18:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.704 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.704 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.269 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.269 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:52.269 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.269 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.269 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.527 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.527 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:52.527 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.527 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.527 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.784 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.784 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:52.784 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.784 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.784 
18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.041 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.041 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:53.041 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.041 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.041 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.299 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.299 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:53.299 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.299 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.299 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.864 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.864 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:53.864 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.864 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.864 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.122 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.122 
18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:54.122 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.122 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.122 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.379 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.379 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:54.379 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.379 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.379 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.637 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.637 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:54.637 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.637 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.637 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.202 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.202 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:55.202 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:17:55.202 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.202 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.460 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.460 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:55.460 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.460 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.460 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.717 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.717 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:55.717 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.717 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.717 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.975 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.975 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:55.975 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.975 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.975 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:17:56.233 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.233 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:56.233 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.233 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.233 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.798 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.798 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:56.798 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.798 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.798 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.056 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.056 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:57.056 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.056 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.056 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.314 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.314 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2947252 00:17:57.314 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.314 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.314 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.572 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.572 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:57.572 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.572 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.572 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.137 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.137 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:58.137 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.137 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.137 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.395 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.395 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:58.395 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.395 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:58.395 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.652 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.652 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:58.652 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.652 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.652 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.910 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.910 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:58.910 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.910 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.910 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.168 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.168 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:59.168 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.168 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.168 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.733 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:17:59.733 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:59.733 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.734 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.734 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.734 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2947252 00:17:59.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2947252) - No such process 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2947252 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:59.992 18:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:59.992 rmmod nvme_tcp 00:17:59.992 rmmod nvme_fabrics 00:17:59.992 rmmod nvme_keyring 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2947099 ']' 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2947099 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2947099 ']' 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2947099 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2947099 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2947099' 00:17:59.992 killing process with pid 2947099 00:17:59.992 18:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2947099 00:17:59.992 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2947099 00:18:01.366 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:01.366 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:01.366 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:01.366 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:01.366 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:18:01.366 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:01.366 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:18:01.366 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:01.366 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:01.366 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.366 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.366 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:03.269 00:18:03.269 real 0m17.238s 00:18:03.269 user 0m43.110s 00:18:03.269 sys 0m6.031s 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:03.269 ************************************ 00:18:03.269 END TEST nvmf_connect_stress 00:18:03.269 ************************************ 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:03.269 ************************************ 00:18:03.269 START TEST nvmf_fused_ordering 00:18:03.269 ************************************ 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:03.269 * Looking for test storage... 
00:18:03.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:03.269 18:25:01 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:03.269 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:03.269 18:25:01 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:03.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.269 --rc genhtml_branch_coverage=1 00:18:03.270 --rc genhtml_function_coverage=1 00:18:03.270 --rc genhtml_legend=1 00:18:03.270 --rc geninfo_all_blocks=1 00:18:03.270 --rc geninfo_unexecuted_blocks=1 00:18:03.270 00:18:03.270 ' 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:03.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.270 --rc genhtml_branch_coverage=1 00:18:03.270 --rc genhtml_function_coverage=1 00:18:03.270 --rc genhtml_legend=1 00:18:03.270 --rc geninfo_all_blocks=1 00:18:03.270 --rc geninfo_unexecuted_blocks=1 00:18:03.270 00:18:03.270 ' 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:03.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.270 --rc genhtml_branch_coverage=1 00:18:03.270 --rc genhtml_function_coverage=1 00:18:03.270 --rc genhtml_legend=1 00:18:03.270 --rc geninfo_all_blocks=1 00:18:03.270 --rc geninfo_unexecuted_blocks=1 00:18:03.270 00:18:03.270 ' 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:03.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.270 --rc genhtml_branch_coverage=1 00:18:03.270 --rc genhtml_function_coverage=1 00:18:03.270 --rc genhtml_legend=1 00:18:03.270 --rc geninfo_all_blocks=1 00:18:03.270 --rc geninfo_unexecuted_blocks=1 00:18:03.270 00:18:03.270 ' 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:03.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:03.270 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.801 18:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:05.801 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:05.801 18:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:05.801 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:05.801 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.802 18:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:05.802 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:05.802 Found net devices under 0000:0a:00.1: cvl_0_1 
00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:05.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:05.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:18:05.802 00:18:05.802 --- 10.0.0.2 ping statistics --- 00:18:05.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.802 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:05.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:05.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:18:05.802 00:18:05.802 --- 10.0.0.1 ping statistics --- 00:18:05.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.802 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:05.802 18:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2950635 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2950635 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2950635 ']' 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.802 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:05.802 [2024-11-18 18:25:03.913412] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:18:05.802 [2024-11-18 18:25:03.913553] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.802 [2024-11-18 18:25:04.066125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.061 [2024-11-18 18:25:04.206489] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.061 [2024-11-18 18:25:04.206558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.061 [2024-11-18 18:25:04.206579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.061 [2024-11-18 18:25:04.206630] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.061 [2024-11-18 18:25:04.206674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:06.061 [2024-11-18 18:25:04.208134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:06.633 [2024-11-18 18:25:04.931278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:06.633 [2024-11-18 18:25:04.947524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:06.633 NULL1 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.633 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:06.938 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.938 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:06.938 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.938 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:06.938 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.938 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:06.938 [2024-11-18 18:25:05.022170] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:18:06.938 [2024-11-18 18:25:05.022279] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2950796 ] 00:18:07.532 Attached to nqn.2016-06.io.spdk:cnode1 00:18:07.532 Namespace ID: 1 size: 1GB 00:18:07.532 fused_ordering(0) 00:18:07.532 fused_ordering(1) 00:18:07.532 fused_ordering(2) 00:18:07.532 fused_ordering(3) 00:18:07.532 fused_ordering(4) 00:18:07.532 fused_ordering(5) 00:18:07.532 fused_ordering(6) 00:18:07.532 fused_ordering(7) 00:18:07.532 fused_ordering(8) 00:18:07.532 fused_ordering(9) 00:18:07.532 fused_ordering(10) 00:18:07.532 fused_ordering(11) 00:18:07.532 fused_ordering(12) 00:18:07.532 fused_ordering(13) 00:18:07.532 fused_ordering(14) 00:18:07.532 fused_ordering(15) 00:18:07.532 fused_ordering(16) 00:18:07.532 fused_ordering(17) 00:18:07.532 fused_ordering(18) 00:18:07.532 fused_ordering(19) 00:18:07.532 fused_ordering(20) 00:18:07.532 fused_ordering(21) 00:18:07.532 fused_ordering(22) 00:18:07.532 fused_ordering(23) 00:18:07.532 fused_ordering(24) 00:18:07.532 fused_ordering(25) 00:18:07.532 fused_ordering(26) 00:18:07.532 fused_ordering(27) 00:18:07.532 
00:18:07.532 fused_ordering(28) … 00:18:10.235 fused_ordering(997) [repetitive fused_ordering counter output, sequence numbers 28 through 997, one entry per line, elided; timestamps advance from 00:18:07.532 to 00:18:10.235]
00:18:10.235 fused_ordering(998) 00:18:10.235 fused_ordering(999) 00:18:10.235 fused_ordering(1000) 00:18:10.235 fused_ordering(1001) 00:18:10.235 fused_ordering(1002) 00:18:10.235 fused_ordering(1003) 00:18:10.235 fused_ordering(1004) 00:18:10.235 fused_ordering(1005) 00:18:10.235 fused_ordering(1006) 00:18:10.235 fused_ordering(1007) 00:18:10.235 fused_ordering(1008) 00:18:10.235 fused_ordering(1009) 00:18:10.235 fused_ordering(1010) 00:18:10.235 fused_ordering(1011) 00:18:10.235 fused_ordering(1012) 00:18:10.235 fused_ordering(1013) 00:18:10.235 fused_ordering(1014) 00:18:10.235 fused_ordering(1015) 00:18:10.235 fused_ordering(1016) 00:18:10.235 fused_ordering(1017) 00:18:10.235 fused_ordering(1018) 00:18:10.235 fused_ordering(1019) 00:18:10.235 fused_ordering(1020) 00:18:10.235 fused_ordering(1021) 00:18:10.235 fused_ordering(1022) 00:18:10.235 fused_ordering(1023) 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:10.235 rmmod nvme_tcp 00:18:10.235 rmmod nvme_fabrics 00:18:10.235 rmmod nvme_keyring 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2950635 ']' 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2950635 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2950635 ']' 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2950635 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2950635 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2950635' 00:18:10.235 killing process with pid 2950635 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2950635 00:18:10.235 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2950635 00:18:11.609 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:11.610 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:18:11.610 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:11.610 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:11.610 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:11.610 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:11.610 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:11.610 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:11.610 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:11.610 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.610 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:11.610 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.510 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:13.510 00:18:13.510 real 0m10.298s 00:18:13.510 user 0m8.703s 00:18:13.510 sys 0m3.617s 00:18:13.511 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.511 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:13.511 ************************************ 00:18:13.511 END TEST nvmf_fused_ordering 00:18:13.511 ************************************ 00:18:13.511 18:25:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:13.511 18:25:11 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:13.511 18:25:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.511 18:25:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:13.511 ************************************ 00:18:13.511 START TEST nvmf_ns_masking 00:18:13.511 ************************************ 00:18:13.511 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:13.511 * Looking for test storage... 00:18:13.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:13.511 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:13.511 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:18:13.511 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.770 18:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:13.770 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:13.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.771 --rc genhtml_branch_coverage=1 00:18:13.771 --rc genhtml_function_coverage=1 00:18:13.771 --rc genhtml_legend=1 00:18:13.771 --rc geninfo_all_blocks=1 00:18:13.771 --rc geninfo_unexecuted_blocks=1 00:18:13.771 00:18:13.771 ' 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:13.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.771 --rc genhtml_branch_coverage=1 00:18:13.771 --rc genhtml_function_coverage=1 00:18:13.771 --rc genhtml_legend=1 00:18:13.771 --rc geninfo_all_blocks=1 00:18:13.771 --rc geninfo_unexecuted_blocks=1 00:18:13.771 00:18:13.771 ' 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:13.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.771 --rc genhtml_branch_coverage=1 00:18:13.771 --rc genhtml_function_coverage=1 00:18:13.771 --rc genhtml_legend=1 00:18:13.771 --rc geninfo_all_blocks=1 00:18:13.771 --rc geninfo_unexecuted_blocks=1 00:18:13.771 00:18:13.771 ' 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:13.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.771 --rc genhtml_branch_coverage=1 00:18:13.771 --rc 
genhtml_function_coverage=1 00:18:13.771 --rc genhtml_legend=1 00:18:13.771 --rc geninfo_all_blocks=1 00:18:13.771 --rc geninfo_unexecuted_blocks=1 00:18:13.771 00:18:13.771 ' 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:13.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=09dda3d8-68b9-42b4-a31f-f65a3fc68475 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=75306b90-c318-46a3-9935-28b1ff133dce 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=4ba5d518-4181-4146-bcd9-0b2cdf72f5f8 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:13.771 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:15.674 18:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:15.674 18:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:15.674 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:15.674 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:18:15.674 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:15.674 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:15.674 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:15.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:18:15.675 00:18:15.675 --- 10.0.0.2 ping statistics --- 00:18:15.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.675 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:15.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:15.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:18:15.675 00:18:15.675 --- 10.0.0.1 ping statistics --- 00:18:15.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.675 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2953150 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
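The trace above is `nvmf_tcp_init` from nvmf/common.sh building its single-host TCP topology: one port of the e810 pair (cvl_0_0) is moved into a network namespace to act as the target side, the other (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule opens TCP port 4420. A minimal standalone sketch of that pattern follows; the function name is mine, not SPDK's, the interface names and IPs are the ones from this log, and `RUN=echo` makes it a dry run (clearing `RUN` and running as root would apply it for real).

```shell
# Dry-run sketch of the netns-based target/initiator topology in the trace.
# RUN=echo prints each command instead of executing it.
RUN=${RUN:-echo}

setup_tcp_pair() {
  local tgt_if=$1 ini_if=$2 ns=$3 tgt_ip=$4 ini_ip=$5
  $RUN ip netns add "$ns"                                  # target-side namespace
  $RUN ip link set "$tgt_if" netns "$ns"                   # move target port into it
  $RUN ip addr add "$ini_ip/24" dev "$ini_if"              # initiator IP, root ns
  $RUN ip netns exec "$ns" ip addr add "$tgt_ip/24" dev "$tgt_if"
  $RUN ip link set "$ini_if" up
  $RUN ip netns exec "$ns" ip link set "$tgt_if" up
  $RUN ip netns exec "$ns" ip link set lo up
  $RUN iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}

setup_tcp_pair cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk 10.0.0.2 10.0.0.1
```

After this, the two `ping -c 1` calls in the log verify both directions of the link before the target application is started inside the namespace via `ip netns exec`.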
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2953150 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2953150 ']' 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.675 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:15.675 [2024-11-18 18:25:14.009130] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:18:15.675 [2024-11-18 18:25:14.009277] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.932 [2024-11-18 18:25:14.162750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.190 [2024-11-18 18:25:14.303434] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.190 [2024-11-18 18:25:14.303521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:16.190 [2024-11-18 18:25:14.303553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.190 [2024-11-18 18:25:14.303580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.190 [2024-11-18 18:25:14.303600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.190 [2024-11-18 18:25:14.305302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.756 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.756 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:16.756 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:16.756 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:16.756 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:16.756 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.756 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:17.014 [2024-11-18 18:25:15.336690] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.271 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:17.271 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:17.271 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:18:17.529 Malloc1 00:18:17.529 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:18.095 Malloc2 00:18:18.095 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:18.095 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:18.661 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.661 [2024-11-18 18:25:16.937401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.661 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:18.661 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4ba5d518-4181-4146-bcd9-0b2cdf72f5f8 -a 10.0.0.2 -s 4420 -i 4 00:18:18.919 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:18.919 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:18.919 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:18.919 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:18.919 18:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:21.447 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:21.447 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:21.447 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:21.447 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:21.447 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:21.447 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:21.447 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:21.447 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:21.447 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:21.447 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:21.447 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:21.447 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.447 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:21.447 [ 0]:0x1 00:18:21.447 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:21.447 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.448 
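The `waitforserial` sequence above (sleep, then up to 16 iterations of `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` until the expected device count appears) is a generic poll-with-retries loop. A stripped-down sketch of that loop, with my own helper name and a shortened sleep interval:

```shell
# Retry a predicate command up to $1 times, pausing between attempts;
# returns 0 on the first success, 1 if every attempt fails.
wait_for() {
  local tries=$1; shift
  local i=0
  while (( i++ < tries )); do
    "$@" && return 0
    sleep 0.1
  done
  return 1
}
```

In the trace the predicate is the `lsblk | grep -c` device count matching `nvme_device_counter`; the loop succeeds on the first pass here because the two-second initial sleep already let the controller enumerate.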
18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=353340c1a64a4982ae263920cab3aa2a 00:18:21.448 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 353340c1a64a4982ae263920cab3aa2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.448 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:21.448 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:21.448 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.448 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:21.448 [ 0]:0x1 00:18:21.448 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:21.448 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.448 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=353340c1a64a4982ae263920cab3aa2a 00:18:21.448 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 353340c1a64a4982ae263920cab3aa2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.448 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:21.448 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.448 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:21.448 [ 1]:0x2 00:18:21.448 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:18:21.448 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.448 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d44698270b654e1f8b1b51de2b08e179 00:18:21.448 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d44698270b654e1f8b1b51de2b08e179 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.448 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:21.448 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:21.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:21.706 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:21.965 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:22.530 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:22.530 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4ba5d518-4181-4146-bcd9-0b2cdf72f5f8 -a 10.0.0.2 -s 4420 -i 4 00:18:22.530 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:22.530 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:22.530 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:22.530 18:25:20 
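The `ns_is_visible` checks above boil down to one comparison: `nvme id-ns` is run against the NSID and the reported NGUID is tested against 32 zeros, since on this target an inactive (masked) NSID comes back with an all-zero NGUID while an attached one reports its real NGUID (353340c1... for Malloc1, d4469827... for Malloc2). That core check, factored out with a helper name of my own choosing:

```shell
# A namespace counts as visible when its NGUID is non-zero; a masked/inactive
# NSID reports an all-zero NGUID in this trace.
ZERO_NGUID=00000000000000000000000000000000

nguid_visible() {
  [[ $1 != "$ZERO_NGUID" ]]
}
```

The test then wraps this in `NOT` to assert the opposite outcome after `nvmf_ns_remove_host` detaches the host from namespace 1.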
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:22.530 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:22.530 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:24.429 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:24.429 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:24.429 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:24.429 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:24.429 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:24.429 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:24.429 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:24.429 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:24.429 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:24.429 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:24.429 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:24.429 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:24.429 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:18:24.429 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:24.429 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.429 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:24.429 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.429 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:24.429 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:24.429 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:24.686 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:24.686 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:24.686 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:24.686 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:24.686 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:24.687 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:24.687 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:24.687 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:24.687 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:18:24.687 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:24.687 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:24.687 [ 0]:0x2 00:18:24.687 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:24.687 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:24.687 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d44698270b654e1f8b1b51de2b08e179 00:18:24.687 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d44698270b654e1f8b1b51de2b08e179 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:24.687 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:24.944 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:24.944 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:24.944 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:24.944 [ 0]:0x1 00:18:24.944 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:24.944 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:24.944 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=353340c1a64a4982ae263920cab3aa2a 00:18:24.944 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 353340c1a64a4982ae263920cab3aa2a != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:24.944 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:24.944 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:24.944 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:24.944 [ 1]:0x2 00:18:24.944 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:24.944 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:24.944 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d44698270b654e1f8b1b51de2b08e179 00:18:24.944 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d44698270b654e1f8b1b51de2b08e179 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:24.944 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:25.509 [ 0]:0x2 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d44698270b654e1f8b1b51de2b08e179 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d44698270b654e1f8b1b51de2b08e179 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:25.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:25.509 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:25.767 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:25.767 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4ba5d518-4181-4146-bcd9-0b2cdf72f5f8 -a 10.0.0.2 -s 4420 -i 4 00:18:26.025 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:26.025 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:26.025 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:26.025 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:26.025 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:26.025 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:27.924 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:27.924 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:27.925 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:27.925 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:27.925 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:27.925 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:27.925 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:27.925 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:27.925 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:27.925 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:27.925 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:27.925 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:27.925 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:27.925 [ 0]:0x1 00:18:27.925 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:27.925 18:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:27.925 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=353340c1a64a4982ae263920cab3aa2a 00:18:27.925 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 353340c1a64a4982ae263920cab3aa2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.925 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:27.925 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:27.925 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:27.925 [ 1]:0x2 00:18:27.925 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:27.925 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:28.183 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d44698270b654e1f8b1b51de2b08e179 00:18:28.183 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d44698270b654e1f8b1b51de2b08e179 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:28.183 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:28.442 
18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:28.442 [ 0]:0x2 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d44698270b654e1f8b1b51de2b08e179 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d44698270b654e1f8b1b51de2b08e179 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:28.442 18:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:28.442 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:28.701 [2024-11-18 18:25:26.964788] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:28.701 request: 00:18:28.701 { 00:18:28.701 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.701 "nsid": 2, 00:18:28.701 "host": "nqn.2016-06.io.spdk:host1", 00:18:28.701 "method": "nvmf_ns_remove_host", 00:18:28.701 "req_id": 1 00:18:28.701 } 00:18:28.701 Got JSON-RPC error response 00:18:28.701 response: 00:18:28.701 { 00:18:28.701 "code": -32602, 00:18:28.701 "message": "Invalid parameters" 00:18:28.701 } 00:18:28.701 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:28.701 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.701 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.701 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.701 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:28.701 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:28.701 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:28.701 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:28.701 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.701 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:28.701 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.701 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:28.701 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:28.701 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:28.701 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:28.701 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:28.701 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:28.701 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:28.701 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:28.701 18:25:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.701 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.701 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.701 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:28.701 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:28.701 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:28.959 [ 0]:0x2 00:18:28.959 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:28.959 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:28.959 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d44698270b654e1f8b1b51de2b08e179 00:18:28.959 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d44698270b654e1f8b1b51de2b08e179 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:28.959 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:28.959 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:28.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:28.959 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2954902 00:18:28.959 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:28.959 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.959 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2954902 /var/tmp/host.sock 00:18:28.959 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2954902 ']' 00:18:28.959 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:28.959 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.959 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:28.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:28.959 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.959 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:28.959 [2024-11-18 18:25:27.242690] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:18:28.959 [2024-11-18 18:25:27.242847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2954902 ] 00:18:29.218 [2024-11-18 18:25:27.400999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.218 [2024-11-18 18:25:27.529547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.152 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.152 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:30.152 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:30.718 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:30.977 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 09dda3d8-68b9-42b4-a31f-f65a3fc68475 00:18:30.977 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:30.977 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 09DDA3D868B942B4A31FF65A3FC68475 -i 00:18:31.235 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 75306b90-c318-46a3-9935-28b1ff133dce 00:18:31.235 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:31.235 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 75306B90C31846A3993528B1FF133DCE -i 00:18:31.493 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:31.750 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:32.008 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:32.008 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:32.574 nvme0n1 00:18:32.574 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:32.574 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:32.832 nvme1n2 00:18:32.832 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:32.832 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:32.832 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:32.832 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:32.832 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:33.090 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:33.090 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:33.090 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:33.090 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:33.348 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 09dda3d8-68b9-42b4-a31f-f65a3fc68475 == \0\9\d\d\a\3\d\8\-\6\8\b\9\-\4\2\b\4\-\a\3\1\f\-\f\6\5\a\3\f\c\6\8\4\7\5 ]] 00:18:33.348 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:33.348 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:33.348 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:33.606 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 75306b90-c318-46a3-9935-28b1ff133dce == \7\5\3\0\6\b\9\0\-\c\3\1\8\-\4\6\a\3\-\9\9\3\5\-\2\8\b\1\f\f\1\3\3\d\c\e ]] 00:18:33.606 18:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:34.173 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:34.173 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 09dda3d8-68b9-42b4-a31f-f65a3fc68475 00:18:34.173 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:34.173 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 09DDA3D868B942B4A31FF65A3FC68475 00:18:34.173 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:34.173 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 09DDA3D868B942B4A31FF65A3FC68475 00:18:34.173 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:34.173 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.173 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:34.173 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.173 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:34.173 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.173 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:34.173 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:34.173 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 09DDA3D868B942B4A31FF65A3FC68475 00:18:34.432 [2024-11-18 18:25:32.748511] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:34.432 [2024-11-18 18:25:32.748596] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:34.432 [2024-11-18 18:25:32.748631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.432 request: 00:18:34.432 { 00:18:34.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.432 "namespace": { 00:18:34.432 "bdev_name": "invalid", 00:18:34.432 "nsid": 1, 00:18:34.432 "nguid": "09DDA3D868B942B4A31FF65A3FC68475", 00:18:34.432 "no_auto_visible": false 00:18:34.432 }, 00:18:34.432 "method": "nvmf_subsystem_add_ns", 00:18:34.432 "req_id": 1 00:18:34.432 } 00:18:34.432 Got JSON-RPC error response 00:18:34.432 response: 00:18:34.432 { 00:18:34.432 "code": -32602, 00:18:34.432 "message": "Invalid parameters" 00:18:34.432 } 00:18:34.690 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:34.690 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:34.690 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:34.690 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:34.690 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 09dda3d8-68b9-42b4-a31f-f65a3fc68475 00:18:34.690 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:34.690 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 09DDA3D868B942B4A31FF65A3FC68475 -i 00:18:34.948 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:36.848 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:36.848 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:36.848 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:37.106 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:37.106 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2954902 00:18:37.106 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2954902 ']' 00:18:37.106 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2954902 00:18:37.106 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:37.106 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.106 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2954902 00:18:37.106 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:37.106 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:37.106 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2954902' 00:18:37.106 killing process with pid 2954902 00:18:37.106 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2954902 00:18:37.106 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2954902 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:39.635 rmmod nvme_tcp 00:18:39.635 rmmod 
nvme_fabrics 00:18:39.635 rmmod nvme_keyring 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2953150 ']' 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2953150 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2953150 ']' 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2953150 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2953150 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2953150' 00:18:39.635 killing process with pid 2953150 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2953150 00:18:39.635 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2953150 00:18:41.100 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:41.100 
18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:41.100 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:41.100 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:41.100 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:41.100 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:41.100 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:41.376 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:41.376 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:41.376 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.376 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.376 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.278 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:43.278 00:18:43.278 real 0m29.725s 00:18:43.278 user 0m44.575s 00:18:43.278 sys 0m4.756s 00:18:43.278 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.278 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:43.278 ************************************ 00:18:43.278 END TEST nvmf_ns_masking 00:18:43.278 ************************************ 00:18:43.278 18:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:43.278 18:25:41 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:43.278 18:25:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:43.278 18:25:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.278 18:25:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:43.278 ************************************ 00:18:43.278 START TEST nvmf_nvme_cli 00:18:43.278 ************************************ 00:18:43.278 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:43.278 * Looking for test storage... 00:18:43.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:43.278 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:43.278 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:18:43.278 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.537 18:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:43.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.537 --rc genhtml_branch_coverage=1 00:18:43.537 --rc genhtml_function_coverage=1 00:18:43.537 --rc genhtml_legend=1 00:18:43.537 --rc geninfo_all_blocks=1 00:18:43.537 --rc geninfo_unexecuted_blocks=1 00:18:43.537 
00:18:43.537 ' 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:43.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.537 --rc genhtml_branch_coverage=1 00:18:43.537 --rc genhtml_function_coverage=1 00:18:43.537 --rc genhtml_legend=1 00:18:43.537 --rc geninfo_all_blocks=1 00:18:43.537 --rc geninfo_unexecuted_blocks=1 00:18:43.537 00:18:43.537 ' 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:43.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.537 --rc genhtml_branch_coverage=1 00:18:43.537 --rc genhtml_function_coverage=1 00:18:43.537 --rc genhtml_legend=1 00:18:43.537 --rc geninfo_all_blocks=1 00:18:43.537 --rc geninfo_unexecuted_blocks=1 00:18:43.537 00:18:43.537 ' 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:43.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.537 --rc genhtml_branch_coverage=1 00:18:43.537 --rc genhtml_function_coverage=1 00:18:43.537 --rc genhtml_legend=1 00:18:43.537 --rc geninfo_all_blocks=1 00:18:43.537 --rc geninfo_unexecuted_blocks=1 00:18:43.537 00:18:43.537 ' 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.537 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.538 18:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:43.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:43.538 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:45.445 18:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:45.445 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:45.445 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.445 18:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:45.445 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:45.445 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:45.445 18:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:45.445 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:45.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:18:45.703 00:18:45.703 --- 10.0.0.2 ping statistics --- 00:18:45.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.703 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:45.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:45.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:18:45.703 00:18:45.703 --- 10.0.0.1 ping statistics --- 00:18:45.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.703 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:45.703 18:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2958336 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2958336 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2958336 ']' 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.703 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:45.703 [2024-11-18 18:25:43.953463] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:18:45.703 [2024-11-18 18:25:43.953605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.961 [2024-11-18 18:25:44.105350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:45.961 [2024-11-18 18:25:44.250243] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.961 [2024-11-18 18:25:44.250325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.961 [2024-11-18 18:25:44.250352] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.961 [2024-11-18 18:25:44.250381] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.961 [2024-11-18 18:25:44.250401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:45.961 [2024-11-18 18:25:44.253157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.961 [2024-11-18 18:25:44.253222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.961 [2024-11-18 18:25:44.253275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.961 [2024-11-18 18:25:44.253281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:46.901 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.901 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:46.901 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:46.901 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:46.901 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:46.901 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.901 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:46.901 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.901 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:46.901 [2024-11-18 18:25:44.933731] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.901 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.901 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:46.901 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:46.901 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:46.901 Malloc0 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:46.901 Malloc1 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:46.901 [2024-11-18 18:25:45.132014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.901 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:18:47.160 00:18:47.160 Discovery Log Number of Records 2, Generation counter 2 00:18:47.160 =====Discovery Log Entry 0====== 00:18:47.160 trtype: tcp 00:18:47.160 adrfam: ipv4 00:18:47.160 subtype: current discovery subsystem 00:18:47.160 treq: not required 00:18:47.160 portid: 0 00:18:47.160 trsvcid: 4420 
00:18:47.160 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:47.160 traddr: 10.0.0.2 00:18:47.160 eflags: explicit discovery connections, duplicate discovery information 00:18:47.160 sectype: none 00:18:47.160 =====Discovery Log Entry 1====== 00:18:47.160 trtype: tcp 00:18:47.160 adrfam: ipv4 00:18:47.160 subtype: nvme subsystem 00:18:47.160 treq: not required 00:18:47.160 portid: 0 00:18:47.160 trsvcid: 4420 00:18:47.160 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:47.160 traddr: 10.0.0.2 00:18:47.160 eflags: none 00:18:47.160 sectype: none 00:18:47.160 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:47.160 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:47.160 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:47.160 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:47.160 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:47.160 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:47.160 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:47.160 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:47.160 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:47.160 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:47.160 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:47.726 18:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:47.726 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:47.726 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.726 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:47.726 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:47.726 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:49.624 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:49.624 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:49.624 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:49.624 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:49.624 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:49.624 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:49.624 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:49.624 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:49.624 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:49.624 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:49.882 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:49.882 
18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:49.882 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:49.882 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:49.882 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:49.882 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:49.882 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:49.882 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:49.882 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:49.882 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:49.882 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:49.882 /dev/nvme0n2 ]] 00:18:49.882 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:49.882 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:49.882 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:49.882 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:49.882 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:50.140 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:50.140 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:50.140 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:18:50.140 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:50.140 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:50.140 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:50.140 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:50.140 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:50.140 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:50.140 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:50.140 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:50.140 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:50.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:50.398 rmmod nvme_tcp 00:18:50.398 rmmod nvme_fabrics 00:18:50.398 rmmod nvme_keyring 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2958336 ']' 
00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2958336 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2958336 ']' 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2958336 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2958336 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2958336' 00:18:50.398 killing process with pid 2958336 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2958336 00:18:50.398 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2958336 00:18:51.774 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:51.774 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:51.774 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:51.774 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:52.031 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:52.031 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:18:52.031 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:52.031 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:52.031 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:52.031 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.031 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:52.031 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.932 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:53.932 00:18:53.932 real 0m10.648s 00:18:53.932 user 0m22.859s 00:18:53.932 sys 0m2.594s 00:18:53.932 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:53.932 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:53.932 ************************************ 00:18:53.932 END TEST nvmf_nvme_cli 00:18:53.932 ************************************ 00:18:53.932 18:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:18:53.932 18:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:53.932 18:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:53.932 18:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:53.932 18:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:53.932 ************************************ 00:18:53.932 START TEST 
nvmf_auth_target 00:18:53.932 ************************************ 00:18:53.932 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:53.932 * Looking for test storage... 00:18:53.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:53.932 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:53.932 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:18:53.932 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:54.191 
18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:54.191 
18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:54.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.191 --rc genhtml_branch_coverage=1 00:18:54.191 --rc genhtml_function_coverage=1 00:18:54.191 --rc genhtml_legend=1 00:18:54.191 --rc geninfo_all_blocks=1 00:18:54.191 --rc geninfo_unexecuted_blocks=1 00:18:54.191 00:18:54.191 ' 00:18:54.191 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:54.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.191 --rc genhtml_branch_coverage=1 00:18:54.191 --rc genhtml_function_coverage=1 00:18:54.191 --rc genhtml_legend=1 00:18:54.191 --rc geninfo_all_blocks=1 00:18:54.191 --rc geninfo_unexecuted_blocks=1 00:18:54.191 00:18:54.191 ' 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:54.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.192 --rc genhtml_branch_coverage=1 00:18:54.192 --rc genhtml_function_coverage=1 00:18:54.192 --rc genhtml_legend=1 00:18:54.192 --rc geninfo_all_blocks=1 00:18:54.192 --rc geninfo_unexecuted_blocks=1 00:18:54.192 00:18:54.192 ' 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:54.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.192 --rc genhtml_branch_coverage=1 00:18:54.192 --rc genhtml_function_coverage=1 00:18:54.192 --rc genhtml_legend=1 00:18:54.192 --rc geninfo_all_blocks=1 00:18:54.192 --rc geninfo_unexecuted_blocks=1 00:18:54.192 00:18:54.192 ' 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:54.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:54.192 18:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:54.192 18:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:54.192 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.092 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:56.092 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:56.092 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:56.092 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:56.092 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:56.092 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:56.092 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:56.092 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:56.092 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:56.092 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:56.092 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:56.092 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:56.092 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:56.092 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:56.092 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:56.092 18:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:56.092 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:56.092 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:56.092 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:56.092 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:56.092 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:56.093 18:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:56.093 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:56.093 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.093 
18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:56.093 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:56.093 
18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:56.093 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:56.093 18:25:54 
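The trace above locates each NIC's kernel net device by globbing sysfs under the device's PCI address (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)`), then strips the path to get the interface name. A minimal standalone sketch of that lookup, run against a mocked sysfs tree so it works without real hardware (the `0000:0a:00.x` addresses and `cvl_0_x` names mirror the log; the mock directory layout is an assumption for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the net-device lookup from gather_supported_nvmf_pci_devs,
# using a mocked sysfs tree instead of the real /sys (no hardware needed).
set -euo pipefail
shopt -s nullglob

sysfs=$(mktemp -d)
# Fake two NIC functions, as in the log (0000:0a:00.0 -> cvl_0_0, .1 -> cvl_0_1)
mkdir -p "$sysfs/bus/pci/devices/0000:0a:00.0/net/cvl_0_0"
mkdir -p "$sysfs/bus/pci/devices/0000:0a:00.1/net/cvl_0_1"

pci_devs=("0000:0a:00.0" "0000:0a:00.1")
net_devs=()
for pci in "${pci_devs[@]}"; do
  # Same glob the test script uses: every net device registered under this PCI function
  pci_net_devs=("$sysfs/bus/pci/devices/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")   # strip leading path, keep interface name
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done
echo "${net_devs[*]}"
rm -rf "$sysfs"
```

The first two interfaces found this way become `NVMF_TARGET_INTERFACE` and `NVMF_INITIATOR_INTERFACE` for the netns-based TCP setup that follows in the trace.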
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:56.093 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:56.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:56.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:18:56.352 00:18:56.352 --- 10.0.0.2 ping statistics --- 00:18:56.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.352 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:56.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:56.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:18:56.352 00:18:56.352 --- 10.0.0.1 ping statistics --- 00:18:56.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.352 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2960991 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2960991 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2960991 ']' 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.352 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.286 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.286 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:57.286 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:57.286 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:57.286 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.286 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.286 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2961143 00:18:57.286 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:57.286 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:57.286 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:57.286 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:57.286 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:57.286 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:57.286 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:18:57.286 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:57.286 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:57.544 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d49eeb0ef78a24f75ee34e17ec9bbd4b8963e6ff650646b2 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.iw9 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d49eeb0ef78a24f75ee34e17ec9bbd4b8963e6ff650646b2 0 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d49eeb0ef78a24f75ee34e17ec9bbd4b8963e6ff650646b2 0 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d49eeb0ef78a24f75ee34e17ec9bbd4b8963e6ff650646b2 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.iw9 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.iw9 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.iw9 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=91c39d9578c794f191f2c6a8dc4f0c612d14b037c06bde025b78906b47fd012a 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.mYf 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 91c39d9578c794f191f2c6a8dc4f0c612d14b037c06bde025b78906b47fd012a 3 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 91c39d9578c794f191f2c6a8dc4f0c612d14b037c06bde025b78906b47fd012a 3 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=91c39d9578c794f191f2c6a8dc4f0c612d14b037c06bde025b78906b47fd012a 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.mYf 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.mYf 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.mYf 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4f2cff3c0e1ca58ee8fa5ff0b2e04e44 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.28u 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4f2cff3c0e1ca58ee8fa5ff0b2e04e44 1 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
4f2cff3c0e1ca58ee8fa5ff0b2e04e44 1 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4f2cff3c0e1ca58ee8fa5ff0b2e04e44 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.28u 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.28u 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.28u 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=965c38414240c444948a00bba92ea7d0e188929a273d4c5a 00:18:57.545 18:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.l84 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 965c38414240c444948a00bba92ea7d0e188929a273d4c5a 2 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 965c38414240c444948a00bba92ea7d0e188929a273d4c5a 2 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=965c38414240c444948a00bba92ea7d0e188929a273d4c5a 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.l84 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.l84 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.l84 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cd39f5355f8e62216932978edf31fbc17a84b186fb394c8c 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.9sb 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cd39f5355f8e62216932978edf31fbc17a84b186fb394c8c 2 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cd39f5355f8e62216932978edf31fbc17a84b186fb394c8c 2 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cd39f5355f8e62216932978edf31fbc17a84b186fb394c8c 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.9sb 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.9sb 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.9sb 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=248d7e4e1744fce939aefd088b6aa9b5 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Zsk 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 248d7e4e1744fce939aefd088b6aa9b5 1 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 248d7e4e1744fce939aefd088b6aa9b5 1 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=248d7e4e1744fce939aefd088b6aa9b5 00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:18:57.545 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:57.803 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Zsk 00:18:57.803 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Zsk 00:18:57.803 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Zsk 00:18:57.803 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:57.803 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:57.803 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:57.803 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:57.803 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:57.803 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:57.803 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:57.803 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d1724d2180585fcd84a2d6a5eea976fdec7d45ea1dfc644661f475f6a2593707 00:18:57.803 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:57.803 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.JQl 00:18:57.803 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d1724d2180585fcd84a2d6a5eea976fdec7d45ea1dfc644661f475f6a2593707 3 00:18:57.803 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 d1724d2180585fcd84a2d6a5eea976fdec7d45ea1dfc644661f475f6a2593707 3 00:18:57.803 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:57.803 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:57.804 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d1724d2180585fcd84a2d6a5eea976fdec7d45ea1dfc644661f475f6a2593707 00:18:57.804 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:57.804 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:57.804 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.JQl 00:18:57.804 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.JQl 00:18:57.804 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.JQl 00:18:57.804 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:57.804 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2960991 00:18:57.804 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2960991 ']' 00:18:57.804 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.804 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.804 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:57.804 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.804 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.062 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.062 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:58.062 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2961143 /var/tmp/host.sock 00:18:58.062 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2961143 ']' 00:18:58.062 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:58.062 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.062 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:58.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
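The `gen_dhchap_key` steps traced above pull 32 random bytes with `xxd -p -c0 -l 32 /dev/urandom`, write the hex string to a `mktemp`'d `/tmp/spdk.key-*` file, and lock it to mode 0600. A minimal Python equivalent of that file handling (names are mine, not SPDK's):

```python
import os
import secrets
import stat
import tempfile

# Sketch of gen_dhchap_key's key-file handling, as traced in the log:
# 32 random bytes as 64 hex chars -> temp file like /tmp/spdk.key-*, mode 0600.
key = secrets.token_hex(32)                     # stands in for: xxd -p -c0 -l 32 /dev/urandom
fd, path = tempfile.mkstemp(prefix="spdk.key-sha256.")
with os.fdopen(fd, "w") as f:
    f.write(key)
os.chmod(path, 0o600)                           # matches the chmod 0600 step above
print(path)
```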
00:18:58.062 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.062 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.628 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.628 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:58.628 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:58.628 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.628 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.628 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.628 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:58.628 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.iw9 00:18:58.628 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.628 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.628 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.628 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.iw9 00:18:58.628 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.iw9 00:18:58.887 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.mYf ]] 00:18:58.887 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mYf 00:18:58.887 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.887 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.887 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.887 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mYf 00:18:58.887 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mYf 00:18:59.145 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:59.145 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.28u 00:18:59.145 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.145 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.145 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.145 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.28u 00:18:59.145 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.28u 00:18:59.403 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.l84 ]] 00:18:59.403 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.l84 00:18:59.403 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.403 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.403 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.403 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.l84 00:18:59.403 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.l84 00:18:59.660 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:59.660 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.9sb 00:18:59.660 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.660 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.918 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.918 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.9sb 00:18:59.918 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.9sb 00:19:00.176 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Zsk ]] 00:19:00.176 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Zsk 00:19:00.176 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.176 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.176 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.176 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Zsk 00:19:00.176 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Zsk 00:19:00.434 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:00.434 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.JQl 00:19:00.434 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.434 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.434 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.434 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.JQl 00:19:00.434 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.JQl 00:19:00.692 18:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:00.692 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:00.692 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:00.692 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.692 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:00.692 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:00.949 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:00.949 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.949 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:00.949 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:00.949 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:00.949 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.949 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.950 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.950 18:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.950 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.950 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.950 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.950 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.207 00:19:01.207 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.207 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.207 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.465 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.465 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.465 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.465 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:01.465 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.465 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.465 { 00:19:01.465 "cntlid": 1, 00:19:01.465 "qid": 0, 00:19:01.465 "state": "enabled", 00:19:01.465 "thread": "nvmf_tgt_poll_group_000", 00:19:01.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:01.465 "listen_address": { 00:19:01.465 "trtype": "TCP", 00:19:01.465 "adrfam": "IPv4", 00:19:01.465 "traddr": "10.0.0.2", 00:19:01.465 "trsvcid": "4420" 00:19:01.465 }, 00:19:01.465 "peer_address": { 00:19:01.465 "trtype": "TCP", 00:19:01.465 "adrfam": "IPv4", 00:19:01.465 "traddr": "10.0.0.1", 00:19:01.465 "trsvcid": "46704" 00:19:01.465 }, 00:19:01.465 "auth": { 00:19:01.465 "state": "completed", 00:19:01.465 "digest": "sha256", 00:19:01.465 "dhgroup": "null" 00:19:01.465 } 00:19:01.465 } 00:19:01.465 ]' 00:19:01.465 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.465 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.465 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.465 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:01.465 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.723 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.723 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.723 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.982 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:19:01.982 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:19:02.916 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.916 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:02.916 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.916 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.916 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.916 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.916 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:19:02.916 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:03.174 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:03.174 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.174 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:03.174 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:03.174 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:03.174 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.174 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.174 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.174 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.174 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.174 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.174 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.174 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.432 00:19:03.432 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.432 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.432 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.690 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.690 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.690 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.690 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.690 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.690 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.690 { 00:19:03.690 "cntlid": 3, 00:19:03.690 "qid": 0, 00:19:03.690 "state": "enabled", 00:19:03.690 "thread": "nvmf_tgt_poll_group_000", 00:19:03.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:03.690 "listen_address": { 00:19:03.690 "trtype": "TCP", 00:19:03.690 "adrfam": "IPv4", 00:19:03.690 
"traddr": "10.0.0.2", 00:19:03.690 "trsvcid": "4420" 00:19:03.690 }, 00:19:03.690 "peer_address": { 00:19:03.690 "trtype": "TCP", 00:19:03.690 "adrfam": "IPv4", 00:19:03.690 "traddr": "10.0.0.1", 00:19:03.690 "trsvcid": "56260" 00:19:03.690 }, 00:19:03.690 "auth": { 00:19:03.690 "state": "completed", 00:19:03.690 "digest": "sha256", 00:19:03.690 "dhgroup": "null" 00:19:03.690 } 00:19:03.690 } 00:19:03.690 ]' 00:19:03.690 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.948 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.948 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.948 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:03.948 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.948 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.948 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.948 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.207 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:19:04.207 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:19:05.141 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.141 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:05.141 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.141 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.141 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.141 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.141 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:05.141 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:05.400 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:05.400 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.400 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:05.400 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:19:05.400 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:05.400 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.400 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.400 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.400 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.400 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.400 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.400 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.400 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.658 00:19:05.658 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.658 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.658 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.916 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.916 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.916 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.916 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.174 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.175 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.175 { 00:19:06.175 "cntlid": 5, 00:19:06.175 "qid": 0, 00:19:06.175 "state": "enabled", 00:19:06.175 "thread": "nvmf_tgt_poll_group_000", 00:19:06.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:06.175 "listen_address": { 00:19:06.175 "trtype": "TCP", 00:19:06.175 "adrfam": "IPv4", 00:19:06.175 "traddr": "10.0.0.2", 00:19:06.175 "trsvcid": "4420" 00:19:06.175 }, 00:19:06.175 "peer_address": { 00:19:06.175 "trtype": "TCP", 00:19:06.175 "adrfam": "IPv4", 00:19:06.175 "traddr": "10.0.0.1", 00:19:06.175 "trsvcid": "56284" 00:19:06.175 }, 00:19:06.175 "auth": { 00:19:06.175 "state": "completed", 00:19:06.175 "digest": "sha256", 00:19:06.175 "dhgroup": "null" 00:19:06.175 } 00:19:06.175 } 00:19:06.175 ]' 00:19:06.175 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.175 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.175 18:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.175 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:06.175 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.175 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.175 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.175 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.433 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:19:06.433 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:19:07.368 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.368 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:07.368 
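The `connect_authenticate` verification repeated throughout this log is `nvmf_subsystem_get_qpairs` piped through `jq`: `.[0].auth.digest`, `.[0].auth.dhgroup`, and `.[0].auth.state` must match the negotiated parameters. The same check, sketched in Python against a trimmed copy of the qpairs JSON captured above:

```python
import json

# Trimmed from the nvmf_subsystem_get_qpairs output in this log.
qpairs = json.loads("""[
  {
    "cntlid": 5,
    "qid": 0,
    "state": "enabled",
    "auth": {"state": "completed", "digest": "sha256", "dhgroup": "null"}
  }
]""")

auth = qpairs[0]["auth"]
# Equivalent of the test's jq assertions on digest/dhgroup/state.
assert auth["digest"] == "sha256"
assert auth["dhgroup"] == "null"
assert auth["state"] == "completed"
print("auth completed:", auth)
```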
18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.368 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.368 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.368 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.368 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:07.368 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:07.934 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:07.934 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.934 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:07.934 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:07.934 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:07.934 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.934 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:07.934 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.934 18:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.934 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.934 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:07.934 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:07.934 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:08.192 00:19:08.192 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.192 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.192 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.450 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.450 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.450 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.450 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.450 18:26:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.450 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.450 { 00:19:08.450 "cntlid": 7, 00:19:08.450 "qid": 0, 00:19:08.450 "state": "enabled", 00:19:08.450 "thread": "nvmf_tgt_poll_group_000", 00:19:08.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:08.450 "listen_address": { 00:19:08.450 "trtype": "TCP", 00:19:08.450 "adrfam": "IPv4", 00:19:08.450 "traddr": "10.0.0.2", 00:19:08.450 "trsvcid": "4420" 00:19:08.450 }, 00:19:08.450 "peer_address": { 00:19:08.450 "trtype": "TCP", 00:19:08.450 "adrfam": "IPv4", 00:19:08.450 "traddr": "10.0.0.1", 00:19:08.450 "trsvcid": "56310" 00:19:08.450 }, 00:19:08.450 "auth": { 00:19:08.450 "state": "completed", 00:19:08.450 "digest": "sha256", 00:19:08.450 "dhgroup": "null" 00:19:08.450 } 00:19:08.450 } 00:19:08.450 ]' 00:19:08.450 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.450 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.450 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.450 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:08.450 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.450 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.450 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.450 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:08.708 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:19:08.708 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:19:09.641 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.898 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.898 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.898 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.898 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.898 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.898 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.898 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:09.899 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.156 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:10.156 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.156 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:10.156 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:10.156 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:10.156 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.156 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.156 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.156 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.156 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.156 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.156 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.156 18:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.414 00:19:10.414 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.414 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.414 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.673 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.673 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.673 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.673 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.673 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.673 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.673 { 00:19:10.673 "cntlid": 9, 00:19:10.673 "qid": 0, 00:19:10.673 "state": "enabled", 00:19:10.673 "thread": "nvmf_tgt_poll_group_000", 00:19:10.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:10.673 "listen_address": { 00:19:10.673 "trtype": "TCP", 00:19:10.673 "adrfam": "IPv4", 00:19:10.673 "traddr": "10.0.0.2", 00:19:10.673 "trsvcid": "4420" 00:19:10.673 }, 00:19:10.673 "peer_address": { 
00:19:10.673 "trtype": "TCP", 00:19:10.673 "adrfam": "IPv4", 00:19:10.673 "traddr": "10.0.0.1", 00:19:10.673 "trsvcid": "56344" 00:19:10.673 }, 00:19:10.673 "auth": { 00:19:10.673 "state": "completed", 00:19:10.673 "digest": "sha256", 00:19:10.673 "dhgroup": "ffdhe2048" 00:19:10.673 } 00:19:10.673 } 00:19:10.673 ]' 00:19:10.673 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.931 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.931 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.931 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:10.931 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.931 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.931 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.931 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.188 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:19:11.188 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:19:12.211 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.211 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.211 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.211 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.211 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.211 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.211 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:12.211 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:12.494 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:12.494 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.494 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:12.494 18:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:12.494 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:12.494 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.494 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.494 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.494 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.494 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.494 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.494 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.494 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.753 00:19:12.753 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.753 18:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.753 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.011 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.011 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.011 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.011 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.011 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.011 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.011 { 00:19:13.011 "cntlid": 11, 00:19:13.011 "qid": 0, 00:19:13.011 "state": "enabled", 00:19:13.011 "thread": "nvmf_tgt_poll_group_000", 00:19:13.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:13.011 "listen_address": { 00:19:13.011 "trtype": "TCP", 00:19:13.011 "adrfam": "IPv4", 00:19:13.011 "traddr": "10.0.0.2", 00:19:13.012 "trsvcid": "4420" 00:19:13.012 }, 00:19:13.012 "peer_address": { 00:19:13.012 "trtype": "TCP", 00:19:13.012 "adrfam": "IPv4", 00:19:13.012 "traddr": "10.0.0.1", 00:19:13.012 "trsvcid": "39428" 00:19:13.012 }, 00:19:13.012 "auth": { 00:19:13.012 "state": "completed", 00:19:13.012 "digest": "sha256", 00:19:13.012 "dhgroup": "ffdhe2048" 00:19:13.012 } 00:19:13.012 } 00:19:13.012 ]' 00:19:13.012 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.012 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:19:13.012 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.012 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.012 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.270 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.270 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.270 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.528 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:19:13.528 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:19:14.461 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.462 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.462 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.462 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.462 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.462 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.462 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.462 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.720 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:14.720 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.720 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:14.720 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:14.720 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:14.720 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.720 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.720 18:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.720 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.720 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.720 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.720 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.720 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.978 00:19:14.978 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.978 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.978 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.235 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.235 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.236 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.236 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.236 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.236 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.236 { 00:19:15.236 "cntlid": 13, 00:19:15.236 "qid": 0, 00:19:15.236 "state": "enabled", 00:19:15.236 "thread": "nvmf_tgt_poll_group_000", 00:19:15.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:15.236 "listen_address": { 00:19:15.236 "trtype": "TCP", 00:19:15.236 "adrfam": "IPv4", 00:19:15.236 "traddr": "10.0.0.2", 00:19:15.236 "trsvcid": "4420" 00:19:15.236 }, 00:19:15.236 "peer_address": { 00:19:15.236 "trtype": "TCP", 00:19:15.236 "adrfam": "IPv4", 00:19:15.236 "traddr": "10.0.0.1", 00:19:15.236 "trsvcid": "39452" 00:19:15.236 }, 00:19:15.236 "auth": { 00:19:15.236 "state": "completed", 00:19:15.236 "digest": "sha256", 00:19:15.236 "dhgroup": "ffdhe2048" 00:19:15.236 } 00:19:15.236 } 00:19:15.236 ]' 00:19:15.236 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.494 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.494 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.494 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:15.494 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.494 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.494 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:15.494 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.752 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:19:15.752 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:19:16.687 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.687 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.687 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.687 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.687 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.687 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.687 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:16.687 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:16.946 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:16.946 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.946 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:16.946 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:16.946 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:16.946 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.946 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:16.946 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.946 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.946 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.946 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:16.946 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:16.946 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:17.204 00:19:17.204 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.204 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.204 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.462 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.462 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.462 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.462 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.720 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.720 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.720 { 00:19:17.720 "cntlid": 15, 00:19:17.720 "qid": 0, 00:19:17.720 "state": "enabled", 00:19:17.720 "thread": "nvmf_tgt_poll_group_000", 00:19:17.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:17.720 "listen_address": { 00:19:17.720 "trtype": "TCP", 00:19:17.720 "adrfam": "IPv4", 00:19:17.720 "traddr": "10.0.0.2", 00:19:17.720 "trsvcid": 
"4420" 00:19:17.720 }, 00:19:17.720 "peer_address": { 00:19:17.720 "trtype": "TCP", 00:19:17.721 "adrfam": "IPv4", 00:19:17.721 "traddr": "10.0.0.1", 00:19:17.721 "trsvcid": "39478" 00:19:17.721 }, 00:19:17.721 "auth": { 00:19:17.721 "state": "completed", 00:19:17.721 "digest": "sha256", 00:19:17.721 "dhgroup": "ffdhe2048" 00:19:17.721 } 00:19:17.721 } 00:19:17.721 ]' 00:19:17.721 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.721 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.721 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.721 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:17.721 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.721 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.721 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.721 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.979 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:19:17.979 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:19:18.912 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.912 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.912 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.912 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.912 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.912 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:18.912 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.912 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:18.912 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:19.171 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:19.171 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.171 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:19.171 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:19.171 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:19.171 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.171 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.171 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.171 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.172 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.172 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.172 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.172 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.738 00:19:19.738 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.738 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:19.738 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.995 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.995 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.995 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.995 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.995 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.995 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.995 { 00:19:19.995 "cntlid": 17, 00:19:19.995 "qid": 0, 00:19:19.995 "state": "enabled", 00:19:19.995 "thread": "nvmf_tgt_poll_group_000", 00:19:19.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:19.995 "listen_address": { 00:19:19.995 "trtype": "TCP", 00:19:19.995 "adrfam": "IPv4", 00:19:19.995 "traddr": "10.0.0.2", 00:19:19.995 "trsvcid": "4420" 00:19:19.995 }, 00:19:19.995 "peer_address": { 00:19:19.995 "trtype": "TCP", 00:19:19.995 "adrfam": "IPv4", 00:19:19.995 "traddr": "10.0.0.1", 00:19:19.995 "trsvcid": "39496" 00:19:19.995 }, 00:19:19.995 "auth": { 00:19:19.995 "state": "completed", 00:19:19.995 "digest": "sha256", 00:19:19.995 "dhgroup": "ffdhe3072" 00:19:19.995 } 00:19:19.995 } 00:19:19.995 ]' 00:19:19.995 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.995 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.995 18:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.995 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:19.995 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.995 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.995 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.995 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.253 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:19:20.253 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:19:21.625 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.625 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.625 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.625 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.625 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.625 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.625 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:21.625 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:21.625 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:21.625 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.625 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:21.625 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:21.625 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:21.625 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.625 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.625 18:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.625 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.625 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.625 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.625 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.625 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.190 00:19:22.190 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.190 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.190 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.190 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.190 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.190 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.190 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.447 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.447 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.447 { 00:19:22.447 "cntlid": 19, 00:19:22.447 "qid": 0, 00:19:22.447 "state": "enabled", 00:19:22.447 "thread": "nvmf_tgt_poll_group_000", 00:19:22.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:22.447 "listen_address": { 00:19:22.447 "trtype": "TCP", 00:19:22.447 "adrfam": "IPv4", 00:19:22.447 "traddr": "10.0.0.2", 00:19:22.447 "trsvcid": "4420" 00:19:22.447 }, 00:19:22.447 "peer_address": { 00:19:22.447 "trtype": "TCP", 00:19:22.447 "adrfam": "IPv4", 00:19:22.447 "traddr": "10.0.0.1", 00:19:22.447 "trsvcid": "39514" 00:19:22.447 }, 00:19:22.447 "auth": { 00:19:22.447 "state": "completed", 00:19:22.447 "digest": "sha256", 00:19:22.447 "dhgroup": "ffdhe3072" 00:19:22.447 } 00:19:22.447 } 00:19:22.447 ]' 00:19:22.447 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.447 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.447 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.448 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:22.448 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.448 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.448 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:22.448 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.706 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:19:22.706 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:19:23.640 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.640 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.640 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.640 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.640 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.640 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.640 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:23.640 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:23.898 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:23.898 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.898 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:23.898 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:23.898 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:23.898 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.898 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.898 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.898 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.898 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.898 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.898 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.898 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.463 00:19:24.463 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.463 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.463 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.722 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.722 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.722 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.722 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.722 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.722 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.722 { 00:19:24.722 "cntlid": 21, 00:19:24.722 "qid": 0, 00:19:24.722 "state": "enabled", 00:19:24.722 "thread": "nvmf_tgt_poll_group_000", 00:19:24.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:24.722 "listen_address": { 
00:19:24.722 "trtype": "TCP", 00:19:24.722 "adrfam": "IPv4", 00:19:24.722 "traddr": "10.0.0.2", 00:19:24.722 "trsvcid": "4420" 00:19:24.722 }, 00:19:24.722 "peer_address": { 00:19:24.722 "trtype": "TCP", 00:19:24.722 "adrfam": "IPv4", 00:19:24.722 "traddr": "10.0.0.1", 00:19:24.722 "trsvcid": "32858" 00:19:24.722 }, 00:19:24.722 "auth": { 00:19:24.722 "state": "completed", 00:19:24.722 "digest": "sha256", 00:19:24.722 "dhgroup": "ffdhe3072" 00:19:24.722 } 00:19:24.722 } 00:19:24.722 ]' 00:19:24.722 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.722 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.722 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.722 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:24.722 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.722 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.722 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.722 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.981 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:19:24.981 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:19:25.914 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.914 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.914 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.914 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.914 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.914 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.914 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:25.914 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:26.480 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:26.480 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.480 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:19:26.480 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:26.480 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:26.480 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.480 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:26.480 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.480 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.480 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.480 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:26.480 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:26.480 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:26.738 00:19:26.738 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.738 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:19:26.738 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.996 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.996 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.996 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.996 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.996 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.996 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.996 { 00:19:26.996 "cntlid": 23, 00:19:26.996 "qid": 0, 00:19:26.996 "state": "enabled", 00:19:26.996 "thread": "nvmf_tgt_poll_group_000", 00:19:26.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:26.996 "listen_address": { 00:19:26.996 "trtype": "TCP", 00:19:26.996 "adrfam": "IPv4", 00:19:26.996 "traddr": "10.0.0.2", 00:19:26.996 "trsvcid": "4420" 00:19:26.996 }, 00:19:26.996 "peer_address": { 00:19:26.996 "trtype": "TCP", 00:19:26.996 "adrfam": "IPv4", 00:19:26.996 "traddr": "10.0.0.1", 00:19:26.996 "trsvcid": "32874" 00:19:26.996 }, 00:19:26.996 "auth": { 00:19:26.996 "state": "completed", 00:19:26.996 "digest": "sha256", 00:19:26.996 "dhgroup": "ffdhe3072" 00:19:26.996 } 00:19:26.996 } 00:19:26.996 ]' 00:19:26.996 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.996 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.996 18:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.996 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:26.996 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.996 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.996 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.996 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.254 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:19:27.254 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:19:28.186 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.445 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.445 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:28.445 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.445 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.445 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.445 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.445 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:28.445 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:28.703 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:28.703 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.703 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:28.703 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:28.703 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:28.703 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.703 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.703 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:28.703 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.703 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.703 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.703 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.703 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.960 00:19:28.960 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.960 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.960 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.218 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.218 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.218 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.218 18:26:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.218 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.218 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.218 { 00:19:29.218 "cntlid": 25, 00:19:29.218 "qid": 0, 00:19:29.218 "state": "enabled", 00:19:29.218 "thread": "nvmf_tgt_poll_group_000", 00:19:29.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:29.218 "listen_address": { 00:19:29.218 "trtype": "TCP", 00:19:29.219 "adrfam": "IPv4", 00:19:29.219 "traddr": "10.0.0.2", 00:19:29.219 "trsvcid": "4420" 00:19:29.219 }, 00:19:29.219 "peer_address": { 00:19:29.219 "trtype": "TCP", 00:19:29.219 "adrfam": "IPv4", 00:19:29.219 "traddr": "10.0.0.1", 00:19:29.219 "trsvcid": "32902" 00:19:29.219 }, 00:19:29.219 "auth": { 00:19:29.219 "state": "completed", 00:19:29.219 "digest": "sha256", 00:19:29.219 "dhgroup": "ffdhe4096" 00:19:29.219 } 00:19:29.219 } 00:19:29.219 ]' 00:19:29.219 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.477 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.477 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.477 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.477 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.477 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.477 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.477 18:26:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.735 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:19:29.735 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:19:30.667 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.667 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.667 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.667 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.667 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.668 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.668 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:30.668 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:30.926 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:30.926 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.926 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:30.926 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:30.926 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:30.926 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.926 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.926 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.926 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.926 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.926 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.926 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.926 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.491 00:19:31.491 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.491 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.491 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.750 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.750 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.750 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.750 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.750 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.750 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.750 { 00:19:31.750 "cntlid": 27, 00:19:31.750 "qid": 0, 00:19:31.750 "state": "enabled", 00:19:31.750 "thread": "nvmf_tgt_poll_group_000", 00:19:31.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:31.750 
"listen_address": { 00:19:31.750 "trtype": "TCP", 00:19:31.750 "adrfam": "IPv4", 00:19:31.750 "traddr": "10.0.0.2", 00:19:31.750 "trsvcid": "4420" 00:19:31.750 }, 00:19:31.750 "peer_address": { 00:19:31.750 "trtype": "TCP", 00:19:31.750 "adrfam": "IPv4", 00:19:31.750 "traddr": "10.0.0.1", 00:19:31.750 "trsvcid": "32924" 00:19:31.750 }, 00:19:31.750 "auth": { 00:19:31.750 "state": "completed", 00:19:31.750 "digest": "sha256", 00:19:31.750 "dhgroup": "ffdhe4096" 00:19:31.750 } 00:19:31.750 } 00:19:31.750 ]' 00:19:31.750 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.750 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.750 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.750 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:31.750 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.009 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.009 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.009 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.267 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:19:32.267 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:19:33.200 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.200 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.200 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.200 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.200 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.200 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.200 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:33.200 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:33.458 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:33.458 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.458 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:19:33.458 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:33.458 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:33.458 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.458 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.458 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.458 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.458 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.458 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.458 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.458 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.023 00:19:34.023 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:34.023 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.023 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.023 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.023 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.023 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.023 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.023 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.281 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.281 { 00:19:34.281 "cntlid": 29, 00:19:34.281 "qid": 0, 00:19:34.281 "state": "enabled", 00:19:34.281 "thread": "nvmf_tgt_poll_group_000", 00:19:34.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:34.281 "listen_address": { 00:19:34.281 "trtype": "TCP", 00:19:34.281 "adrfam": "IPv4", 00:19:34.281 "traddr": "10.0.0.2", 00:19:34.281 "trsvcid": "4420" 00:19:34.281 }, 00:19:34.281 "peer_address": { 00:19:34.281 "trtype": "TCP", 00:19:34.281 "adrfam": "IPv4", 00:19:34.281 "traddr": "10.0.0.1", 00:19:34.281 "trsvcid": "49532" 00:19:34.282 }, 00:19:34.282 "auth": { 00:19:34.282 "state": "completed", 00:19:34.282 "digest": "sha256", 00:19:34.282 "dhgroup": "ffdhe4096" 00:19:34.282 } 00:19:34.282 } 00:19:34.282 ]' 00:19:34.282 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.282 18:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.282 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.282 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:34.282 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.282 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.282 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.282 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.539 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:19:34.539 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:19:35.472 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.472 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.472 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.472 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.472 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.472 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.473 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:35.473 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:36.039 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:36.039 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.039 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:36.039 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:36.039 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:36.039 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.039 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:36.039 18:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.039 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.039 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.039 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:36.039 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.039 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.296 00:19:36.296 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.296 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.296 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.554 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.554 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.554 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.554 18:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.554 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.554 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.554 { 00:19:36.554 "cntlid": 31, 00:19:36.554 "qid": 0, 00:19:36.554 "state": "enabled", 00:19:36.554 "thread": "nvmf_tgt_poll_group_000", 00:19:36.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:36.554 "listen_address": { 00:19:36.554 "trtype": "TCP", 00:19:36.554 "adrfam": "IPv4", 00:19:36.554 "traddr": "10.0.0.2", 00:19:36.554 "trsvcid": "4420" 00:19:36.554 }, 00:19:36.554 "peer_address": { 00:19:36.554 "trtype": "TCP", 00:19:36.554 "adrfam": "IPv4", 00:19:36.554 "traddr": "10.0.0.1", 00:19:36.554 "trsvcid": "49552" 00:19:36.554 }, 00:19:36.554 "auth": { 00:19:36.554 "state": "completed", 00:19:36.554 "digest": "sha256", 00:19:36.554 "dhgroup": "ffdhe4096" 00:19:36.554 } 00:19:36.554 } 00:19:36.554 ]' 00:19:36.554 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.554 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.554 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.811 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:36.812 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.812 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.812 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.812 18:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.069 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:19:37.069 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:19:38.002 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.002 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.002 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.002 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.002 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.002 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.002 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.002 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:19:38.002 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:38.260 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:38.260 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.260 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:38.260 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:38.260 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:38.260 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.260 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.260 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.260 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.260 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.260 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.260 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.260 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.827 00:19:38.827 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.827 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.827 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.085 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.085 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.085 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.085 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.085 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.085 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.085 { 00:19:39.085 "cntlid": 33, 00:19:39.085 "qid": 0, 00:19:39.085 "state": "enabled", 00:19:39.085 "thread": "nvmf_tgt_poll_group_000", 00:19:39.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:39.085 "listen_address": { 
00:19:39.085 "trtype": "TCP", 00:19:39.085 "adrfam": "IPv4", 00:19:39.085 "traddr": "10.0.0.2", 00:19:39.085 "trsvcid": "4420" 00:19:39.085 }, 00:19:39.085 "peer_address": { 00:19:39.085 "trtype": "TCP", 00:19:39.085 "adrfam": "IPv4", 00:19:39.085 "traddr": "10.0.0.1", 00:19:39.085 "trsvcid": "49576" 00:19:39.085 }, 00:19:39.085 "auth": { 00:19:39.085 "state": "completed", 00:19:39.085 "digest": "sha256", 00:19:39.085 "dhgroup": "ffdhe6144" 00:19:39.085 } 00:19:39.085 } 00:19:39.085 ]' 00:19:39.085 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.085 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.085 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.343 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:39.343 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.343 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.344 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.344 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.601 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:19:39.601 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:19:40.534 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.534 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.534 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.534 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.534 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.534 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.534 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:40.534 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:40.792 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:40.792 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:19:40.792 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:40.792 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:40.792 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:40.792 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.792 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.792 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.792 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.792 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.792 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.792 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.792 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.724 00:19:41.724 18:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.724 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.724 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.724 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.724 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.724 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.724 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.724 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.724 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.724 { 00:19:41.724 "cntlid": 35, 00:19:41.724 "qid": 0, 00:19:41.724 "state": "enabled", 00:19:41.724 "thread": "nvmf_tgt_poll_group_000", 00:19:41.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:41.724 "listen_address": { 00:19:41.724 "trtype": "TCP", 00:19:41.724 "adrfam": "IPv4", 00:19:41.724 "traddr": "10.0.0.2", 00:19:41.724 "trsvcid": "4420" 00:19:41.724 }, 00:19:41.724 "peer_address": { 00:19:41.724 "trtype": "TCP", 00:19:41.724 "adrfam": "IPv4", 00:19:41.724 "traddr": "10.0.0.1", 00:19:41.724 "trsvcid": "49594" 00:19:41.724 }, 00:19:41.724 "auth": { 00:19:41.724 "state": "completed", 00:19:41.724 "digest": "sha256", 00:19:41.724 "dhgroup": "ffdhe6144" 00:19:41.724 } 00:19:41.724 } 00:19:41.724 ]' 00:19:41.724 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:19:41.724 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.724 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.010 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:42.010 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.010 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.010 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.010 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.294 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:19:42.294 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:19:43.227 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.227 18:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.227 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.227 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.227 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.227 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.227 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:43.227 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:43.485 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:43.485 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.485 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.485 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:43.485 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:43.485 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.485 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.485 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.485 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.485 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.485 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.485 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.485 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.050 00:19:44.050 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.050 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.050 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.306 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.306 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.306 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.306 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.306 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.306 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.306 { 00:19:44.306 "cntlid": 37, 00:19:44.306 "qid": 0, 00:19:44.306 "state": "enabled", 00:19:44.306 "thread": "nvmf_tgt_poll_group_000", 00:19:44.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:44.306 "listen_address": { 00:19:44.306 "trtype": "TCP", 00:19:44.306 "adrfam": "IPv4", 00:19:44.306 "traddr": "10.0.0.2", 00:19:44.306 "trsvcid": "4420" 00:19:44.306 }, 00:19:44.306 "peer_address": { 00:19:44.306 "trtype": "TCP", 00:19:44.306 "adrfam": "IPv4", 00:19:44.306 "traddr": "10.0.0.1", 00:19:44.306 "trsvcid": "56112" 00:19:44.306 }, 00:19:44.306 "auth": { 00:19:44.306 "state": "completed", 00:19:44.306 "digest": "sha256", 00:19:44.306 "dhgroup": "ffdhe6144" 00:19:44.306 } 00:19:44.306 } 00:19:44.306 ]' 00:19:44.306 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.563 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.563 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.563 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:44.563 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.563 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:44.563 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.563 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.821 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:19:44.821 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:19:45.753 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.753 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.753 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.753 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.753 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.753 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:45.753 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:45.753 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:46.318 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:46.318 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.318 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.318 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:46.318 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:46.318 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.318 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:46.318 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.318 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.318 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.318 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:46.318 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:46.318 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:46.883 00:19:46.883 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.883 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.883 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.141 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.141 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.141 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.141 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.141 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.141 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.141 { 00:19:47.141 "cntlid": 39, 00:19:47.141 "qid": 0, 00:19:47.141 "state": "enabled", 00:19:47.141 "thread": "nvmf_tgt_poll_group_000", 00:19:47.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:47.141 "listen_address": { 00:19:47.141 "trtype": 
"TCP", 00:19:47.141 "adrfam": "IPv4", 00:19:47.141 "traddr": "10.0.0.2", 00:19:47.141 "trsvcid": "4420" 00:19:47.141 }, 00:19:47.141 "peer_address": { 00:19:47.141 "trtype": "TCP", 00:19:47.141 "adrfam": "IPv4", 00:19:47.141 "traddr": "10.0.0.1", 00:19:47.141 "trsvcid": "56148" 00:19:47.141 }, 00:19:47.141 "auth": { 00:19:47.141 "state": "completed", 00:19:47.141 "digest": "sha256", 00:19:47.141 "dhgroup": "ffdhe6144" 00:19:47.141 } 00:19:47.141 } 00:19:47.141 ]' 00:19:47.141 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.141 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.141 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.141 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:47.141 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.141 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.141 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.141 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.399 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:19:47.399 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:19:48.331 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.331 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.331 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.331 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.331 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.331 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.331 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.331 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:48.331 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:48.589 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:48.589 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.589 18:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.589 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:48.589 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:48.589 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.589 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.589 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.589 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.589 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.589 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.589 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.589 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.523 00:19:49.523 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.523 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.523 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.781 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.781 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.781 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.781 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.781 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.781 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.781 { 00:19:49.781 "cntlid": 41, 00:19:49.781 "qid": 0, 00:19:49.781 "state": "enabled", 00:19:49.781 "thread": "nvmf_tgt_poll_group_000", 00:19:49.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:49.781 "listen_address": { 00:19:49.781 "trtype": "TCP", 00:19:49.781 "adrfam": "IPv4", 00:19:49.781 "traddr": "10.0.0.2", 00:19:49.781 "trsvcid": "4420" 00:19:49.781 }, 00:19:49.781 "peer_address": { 00:19:49.781 "trtype": "TCP", 00:19:49.781 "adrfam": "IPv4", 00:19:49.781 "traddr": "10.0.0.1", 00:19:49.781 "trsvcid": "56172" 00:19:49.781 }, 00:19:49.781 "auth": { 00:19:49.781 "state": "completed", 00:19:49.781 "digest": "sha256", 00:19:49.781 "dhgroup": "ffdhe8192" 00:19:49.781 } 00:19:49.781 } 00:19:49.781 ]' 00:19:49.781 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.781 18:26:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.781 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.781 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:49.781 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.038 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.038 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.038 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.296 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:19:50.296 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:19:51.228 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:19:51.228 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.228 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.228 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.228 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.228 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.228 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:51.228 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:51.486 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:51.486 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.486 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.486 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:51.486 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:51.486 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.486 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.486 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.486 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.486 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.486 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.486 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.486 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.420 00:19:52.420 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.420 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.420 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.678 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.678 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.678 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.678 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.678 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.678 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.678 { 00:19:52.678 "cntlid": 43, 00:19:52.678 "qid": 0, 00:19:52.678 "state": "enabled", 00:19:52.678 "thread": "nvmf_tgt_poll_group_000", 00:19:52.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:52.678 "listen_address": { 00:19:52.678 "trtype": "TCP", 00:19:52.678 "adrfam": "IPv4", 00:19:52.678 "traddr": "10.0.0.2", 00:19:52.678 "trsvcid": "4420" 00:19:52.678 }, 00:19:52.678 "peer_address": { 00:19:52.678 "trtype": "TCP", 00:19:52.678 "adrfam": "IPv4", 00:19:52.678 "traddr": "10.0.0.1", 00:19:52.678 "trsvcid": "56186" 00:19:52.678 }, 00:19:52.678 "auth": { 00:19:52.678 "state": "completed", 00:19:52.678 "digest": "sha256", 00:19:52.678 "dhgroup": "ffdhe8192" 00:19:52.678 } 00:19:52.678 } 00:19:52.678 ]' 00:19:52.678 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.678 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.678 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.936 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:52.936 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.936 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:52.936 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.936 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.193 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:19:53.194 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:19:54.128 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.128 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.128 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.128 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.128 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.128 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:54.128 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:54.128 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:54.694 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:54.694 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.694 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.694 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:54.694 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:54.694 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.694 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.694 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.694 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.694 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.694 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.694 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.694 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.627 00:19:55.627 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.627 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.627 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.885 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.885 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.885 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.885 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.885 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.885 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.885 { 00:19:55.885 "cntlid": 45, 00:19:55.885 "qid": 0, 00:19:55.885 "state": "enabled", 00:19:55.885 "thread": "nvmf_tgt_poll_group_000", 00:19:55.885 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:55.885 "listen_address": { 00:19:55.885 "trtype": "TCP", 00:19:55.885 "adrfam": "IPv4", 00:19:55.885 "traddr": "10.0.0.2", 00:19:55.885 "trsvcid": "4420" 00:19:55.885 }, 00:19:55.885 "peer_address": { 00:19:55.885 "trtype": "TCP", 00:19:55.885 "adrfam": "IPv4", 00:19:55.885 "traddr": "10.0.0.1", 00:19:55.885 "trsvcid": "39512" 00:19:55.885 }, 00:19:55.885 "auth": { 00:19:55.885 "state": "completed", 00:19:55.885 "digest": "sha256", 00:19:55.885 "dhgroup": "ffdhe8192" 00:19:55.885 } 00:19:55.885 } 00:19:55.885 ]' 00:19:55.885 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.885 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.885 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.885 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:55.885 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.885 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.885 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.885 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.143 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:19:56.143 18:26:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:19:57.078 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.078 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.078 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.078 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.078 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.078 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.078 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:57.078 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:57.644 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:57.644 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:19:57.644 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.644 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:57.644 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:57.644 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.644 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:57.644 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.644 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.644 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.644 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:57.644 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:57.644 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.578 00:19:58.578 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:58.578 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.578 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.578 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.578 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.578 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.578 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.578 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.578 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.578 { 00:19:58.578 "cntlid": 47, 00:19:58.578 "qid": 0, 00:19:58.578 "state": "enabled", 00:19:58.578 "thread": "nvmf_tgt_poll_group_000", 00:19:58.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:58.578 "listen_address": { 00:19:58.578 "trtype": "TCP", 00:19:58.578 "adrfam": "IPv4", 00:19:58.578 "traddr": "10.0.0.2", 00:19:58.578 "trsvcid": "4420" 00:19:58.578 }, 00:19:58.578 "peer_address": { 00:19:58.578 "trtype": "TCP", 00:19:58.578 "adrfam": "IPv4", 00:19:58.578 "traddr": "10.0.0.1", 00:19:58.578 "trsvcid": "39534" 00:19:58.578 }, 00:19:58.578 "auth": { 00:19:58.578 "state": "completed", 00:19:58.578 "digest": "sha256", 00:19:58.578 "dhgroup": "ffdhe8192" 00:19:58.578 } 00:19:58.578 } 00:19:58.578 ]' 00:19:58.578 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.578 18:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.578 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.836 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:58.836 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.836 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.836 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.836 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.094 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:19:59.094 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:20:00.027 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.027 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.027 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.027 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.027 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.027 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:00.027 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.027 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.027 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:00.027 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:00.285 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:00.285 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.285 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:00.285 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:00.285 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:00.285 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.285 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.285 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.285 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.285 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.285 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.285 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.285 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.543 00:20:00.543 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.543 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.543 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.801 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.801 18:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.801 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.801 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.801 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.801 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.801 { 00:20:00.801 "cntlid": 49, 00:20:00.801 "qid": 0, 00:20:00.801 "state": "enabled", 00:20:00.801 "thread": "nvmf_tgt_poll_group_000", 00:20:00.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:00.801 "listen_address": { 00:20:00.801 "trtype": "TCP", 00:20:00.801 "adrfam": "IPv4", 00:20:00.801 "traddr": "10.0.0.2", 00:20:00.801 "trsvcid": "4420" 00:20:00.801 }, 00:20:00.801 "peer_address": { 00:20:00.801 "trtype": "TCP", 00:20:00.801 "adrfam": "IPv4", 00:20:00.801 "traddr": "10.0.0.1", 00:20:00.801 "trsvcid": "39556" 00:20:00.801 }, 00:20:00.801 "auth": { 00:20:00.801 "state": "completed", 00:20:00.801 "digest": "sha384", 00:20:00.801 "dhgroup": "null" 00:20:00.801 } 00:20:00.801 } 00:20:00.801 ]' 00:20:00.801 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.801 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.801 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.058 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:01.058 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.058 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.058 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.058 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.315 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:20:01.316 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:20:02.249 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.249 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.249 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.249 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.249 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.249 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.249 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:02.249 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:02.508 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:02.508 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.508 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:02.508 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:02.508 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:02.508 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.508 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.508 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.508 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.508 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.508 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.508 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.508 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.766 00:20:02.766 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.766 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.766 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.330 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.330 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.330 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.330 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.330 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.330 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.330 { 00:20:03.330 "cntlid": 51, 
00:20:03.330 "qid": 0, 00:20:03.330 "state": "enabled", 00:20:03.330 "thread": "nvmf_tgt_poll_group_000", 00:20:03.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:03.330 "listen_address": { 00:20:03.330 "trtype": "TCP", 00:20:03.330 "adrfam": "IPv4", 00:20:03.330 "traddr": "10.0.0.2", 00:20:03.330 "trsvcid": "4420" 00:20:03.330 }, 00:20:03.330 "peer_address": { 00:20:03.330 "trtype": "TCP", 00:20:03.330 "adrfam": "IPv4", 00:20:03.330 "traddr": "10.0.0.1", 00:20:03.330 "trsvcid": "42466" 00:20:03.330 }, 00:20:03.330 "auth": { 00:20:03.330 "state": "completed", 00:20:03.331 "digest": "sha384", 00:20:03.331 "dhgroup": "null" 00:20:03.331 } 00:20:03.331 } 00:20:03.331 ]' 00:20:03.331 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.331 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.331 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.331 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:03.331 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.331 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.331 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.331 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.588 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret 
DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:20:03.588 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:20:04.521 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.521 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.521 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.521 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.521 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.521 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.521 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:04.521 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:04.780 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:20:04.780 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.780 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:04.780 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:04.780 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:04.780 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.780 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.780 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.780 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.780 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.780 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.780 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.780 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.038 00:20:05.038 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.038 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.038 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.603 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.603 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.603 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.603 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.603 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.603 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.603 { 00:20:05.603 "cntlid": 53, 00:20:05.603 "qid": 0, 00:20:05.603 "state": "enabled", 00:20:05.603 "thread": "nvmf_tgt_poll_group_000", 00:20:05.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:05.603 "listen_address": { 00:20:05.603 "trtype": "TCP", 00:20:05.603 "adrfam": "IPv4", 00:20:05.603 "traddr": "10.0.0.2", 00:20:05.603 "trsvcid": "4420" 00:20:05.603 }, 00:20:05.603 "peer_address": { 00:20:05.603 "trtype": "TCP", 00:20:05.603 "adrfam": "IPv4", 00:20:05.603 "traddr": "10.0.0.1", 00:20:05.603 "trsvcid": "42484" 00:20:05.603 }, 00:20:05.603 "auth": { 00:20:05.603 "state": "completed", 00:20:05.603 "digest": "sha384", 00:20:05.603 "dhgroup": "null" 00:20:05.603 } 00:20:05.603 } 
00:20:05.603 ]' 00:20:05.603 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.603 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.603 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.603 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:05.603 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.603 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.603 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.603 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.861 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:20:05.861 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:20:06.793 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.793 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.793 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.793 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.793 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.793 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.793 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.793 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:06.793 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:07.051 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:07.051 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.051 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:07.051 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:07.051 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:07.051 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.051 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:07.051 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.051 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.051 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.051 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:07.051 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.051 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.309 00:20:07.567 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.567 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.567 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.825 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.825 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:07.825 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.825 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.825 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.825 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.825 { 00:20:07.825 "cntlid": 55, 00:20:07.825 "qid": 0, 00:20:07.825 "state": "enabled", 00:20:07.825 "thread": "nvmf_tgt_poll_group_000", 00:20:07.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:07.825 "listen_address": { 00:20:07.825 "trtype": "TCP", 00:20:07.825 "adrfam": "IPv4", 00:20:07.825 "traddr": "10.0.0.2", 00:20:07.825 "trsvcid": "4420" 00:20:07.825 }, 00:20:07.825 "peer_address": { 00:20:07.825 "trtype": "TCP", 00:20:07.825 "adrfam": "IPv4", 00:20:07.825 "traddr": "10.0.0.1", 00:20:07.825 "trsvcid": "42514" 00:20:07.825 }, 00:20:07.825 "auth": { 00:20:07.825 "state": "completed", 00:20:07.825 "digest": "sha384", 00:20:07.825 "dhgroup": "null" 00:20:07.825 } 00:20:07.825 } 00:20:07.825 ]' 00:20:07.825 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.826 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.826 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.826 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:07.826 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.826 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.826 18:27:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.826 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.084 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:20:08.084 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:20:09.018 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.018 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.018 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.018 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.018 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.018 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.018 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.018 18:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:09.018 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:09.594 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:09.594 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.594 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:09.594 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:09.594 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:09.594 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.594 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.594 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.594 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.594 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.594 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.594 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.594 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.851 00:20:09.851 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.851 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.851 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.110 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.110 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.110 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.110 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.110 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.110 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.110 { 00:20:10.110 "cntlid": 57, 00:20:10.110 "qid": 0, 00:20:10.110 "state": "enabled", 00:20:10.110 "thread": "nvmf_tgt_poll_group_000", 00:20:10.110 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:10.110 "listen_address": { 00:20:10.110 "trtype": "TCP", 00:20:10.110 "adrfam": "IPv4", 00:20:10.110 "traddr": "10.0.0.2", 00:20:10.110 "trsvcid": "4420" 00:20:10.110 }, 00:20:10.110 "peer_address": { 00:20:10.110 "trtype": "TCP", 00:20:10.110 "adrfam": "IPv4", 00:20:10.110 "traddr": "10.0.0.1", 00:20:10.110 "trsvcid": "42550" 00:20:10.110 }, 00:20:10.110 "auth": { 00:20:10.110 "state": "completed", 00:20:10.110 "digest": "sha384", 00:20:10.110 "dhgroup": "ffdhe2048" 00:20:10.110 } 00:20:10.110 } 00:20:10.110 ]' 00:20:10.110 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.110 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.110 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.110 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:10.110 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.110 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.110 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.110 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.676 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret 
DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:20:10.676 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:20:11.609 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.609 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.609 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.609 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.609 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.609 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.609 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:11.609 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:11.867 18:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:11.867 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.867 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:11.867 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:11.867 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:11.867 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.867 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.868 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.868 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.868 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.868 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.868 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.868 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.176 00:20:12.176 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.176 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.176 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.472 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.472 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.472 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.472 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.472 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.473 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.473 { 00:20:12.473 "cntlid": 59, 00:20:12.473 "qid": 0, 00:20:12.473 "state": "enabled", 00:20:12.473 "thread": "nvmf_tgt_poll_group_000", 00:20:12.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:12.473 "listen_address": { 00:20:12.473 "trtype": "TCP", 00:20:12.473 "adrfam": "IPv4", 00:20:12.473 "traddr": "10.0.0.2", 00:20:12.473 "trsvcid": "4420" 00:20:12.473 }, 00:20:12.473 "peer_address": { 00:20:12.473 "trtype": "TCP", 00:20:12.473 "adrfam": "IPv4", 00:20:12.473 "traddr": "10.0.0.1", 00:20:12.473 "trsvcid": "42576" 00:20:12.473 }, 00:20:12.473 "auth": { 00:20:12.473 "state": 
"completed", 00:20:12.473 "digest": "sha384", 00:20:12.473 "dhgroup": "ffdhe2048" 00:20:12.473 } 00:20:12.473 } 00:20:12.473 ]' 00:20:12.473 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.473 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.473 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.473 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:12.473 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.473 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.473 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.473 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.730 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:20:12.730 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:20:13.664 18:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.664 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.664 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.664 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.664 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.664 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.664 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:13.664 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:13.922 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:13.922 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.922 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:13.922 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:13.922 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:13.922 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.922 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.922 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.922 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.922 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.922 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.922 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.922 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.486 00:20:14.486 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.486 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.486 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.744 
18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.744 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.744 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.744 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.744 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.744 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.744 { 00:20:14.744 "cntlid": 61, 00:20:14.744 "qid": 0, 00:20:14.744 "state": "enabled", 00:20:14.744 "thread": "nvmf_tgt_poll_group_000", 00:20:14.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:14.744 "listen_address": { 00:20:14.744 "trtype": "TCP", 00:20:14.744 "adrfam": "IPv4", 00:20:14.744 "traddr": "10.0.0.2", 00:20:14.744 "trsvcid": "4420" 00:20:14.744 }, 00:20:14.744 "peer_address": { 00:20:14.744 "trtype": "TCP", 00:20:14.744 "adrfam": "IPv4", 00:20:14.744 "traddr": "10.0.0.1", 00:20:14.744 "trsvcid": "46938" 00:20:14.744 }, 00:20:14.744 "auth": { 00:20:14.744 "state": "completed", 00:20:14.744 "digest": "sha384", 00:20:14.744 "dhgroup": "ffdhe2048" 00:20:14.744 } 00:20:14.744 } 00:20:14.744 ]' 00:20:14.744 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.744 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.744 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.744 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:14.744 18:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.744 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.744 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.744 18:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.001 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:20:15.001 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:20:15.932 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.932 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.932 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.932 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.932 
18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.932 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.932 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:15.932 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:16.189 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:16.189 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.189 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.189 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:16.189 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:16.189 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.189 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:16.189 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.189 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.189 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.189 18:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:16.189 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.189 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.754 00:20:16.754 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.754 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.754 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.754 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.754 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.754 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.754 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.754 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.754 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.754 { 00:20:16.754 "cntlid": 63, 00:20:16.754 
"qid": 0, 00:20:16.754 "state": "enabled", 00:20:16.754 "thread": "nvmf_tgt_poll_group_000", 00:20:16.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:16.754 "listen_address": { 00:20:16.754 "trtype": "TCP", 00:20:16.754 "adrfam": "IPv4", 00:20:16.754 "traddr": "10.0.0.2", 00:20:16.754 "trsvcid": "4420" 00:20:16.754 }, 00:20:16.754 "peer_address": { 00:20:16.754 "trtype": "TCP", 00:20:16.754 "adrfam": "IPv4", 00:20:16.754 "traddr": "10.0.0.1", 00:20:16.754 "trsvcid": "46958" 00:20:16.754 }, 00:20:16.754 "auth": { 00:20:16.754 "state": "completed", 00:20:16.754 "digest": "sha384", 00:20:16.754 "dhgroup": "ffdhe2048" 00:20:16.754 } 00:20:16.754 } 00:20:16.754 ]' 00:20:16.754 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.011 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.011 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.011 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:17.011 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.011 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.011 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.011 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.269 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:20:17.269 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:20:18.202 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.202 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.202 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.202 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.202 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.202 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.202 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.202 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:18.202 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:18.460 18:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:18.460 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.460 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:18.460 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:18.460 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:18.460 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.460 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.460 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.460 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.460 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.460 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.460 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.460 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.026 00:20:19.026 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.026 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.026 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.283 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.283 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.283 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.283 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.284 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.284 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.284 { 00:20:19.284 "cntlid": 65, 00:20:19.284 "qid": 0, 00:20:19.284 "state": "enabled", 00:20:19.284 "thread": "nvmf_tgt_poll_group_000", 00:20:19.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:19.284 "listen_address": { 00:20:19.284 "trtype": "TCP", 00:20:19.284 "adrfam": "IPv4", 00:20:19.284 "traddr": "10.0.0.2", 00:20:19.284 "trsvcid": "4420" 00:20:19.284 }, 00:20:19.284 "peer_address": { 00:20:19.284 "trtype": "TCP", 00:20:19.284 "adrfam": "IPv4", 00:20:19.284 "traddr": "10.0.0.1", 00:20:19.284 "trsvcid": "46996" 00:20:19.284 }, 00:20:19.284 "auth": { 00:20:19.284 "state": 
"completed", 00:20:19.284 "digest": "sha384", 00:20:19.284 "dhgroup": "ffdhe3072" 00:20:19.284 } 00:20:19.284 } 00:20:19.284 ]' 00:20:19.284 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.284 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.284 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.284 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:19.284 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.284 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.284 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.284 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.542 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:20:19.542 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret 
DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:20:20.476 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.476 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.476 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.476 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.476 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.476 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.476 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:20.476 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:20.734 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:20.734 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.734 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.734 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:20.734 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:20:20.734 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.734 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.734 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.734 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.992 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.992 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.992 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.992 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.249 00:20:21.249 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.249 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.249 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.508 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.508 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.508 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.508 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.508 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.508 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.508 { 00:20:21.508 "cntlid": 67, 00:20:21.508 "qid": 0, 00:20:21.508 "state": "enabled", 00:20:21.508 "thread": "nvmf_tgt_poll_group_000", 00:20:21.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:21.508 "listen_address": { 00:20:21.508 "trtype": "TCP", 00:20:21.508 "adrfam": "IPv4", 00:20:21.508 "traddr": "10.0.0.2", 00:20:21.508 "trsvcid": "4420" 00:20:21.508 }, 00:20:21.508 "peer_address": { 00:20:21.508 "trtype": "TCP", 00:20:21.508 "adrfam": "IPv4", 00:20:21.508 "traddr": "10.0.0.1", 00:20:21.508 "trsvcid": "47012" 00:20:21.508 }, 00:20:21.508 "auth": { 00:20:21.508 "state": "completed", 00:20:21.508 "digest": "sha384", 00:20:21.508 "dhgroup": "ffdhe3072" 00:20:21.508 } 00:20:21.508 } 00:20:21.508 ]' 00:20:21.508 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.508 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.508 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.508 18:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:21.508 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.508 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.508 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.508 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.766 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:20:21.766 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:20:22.699 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.957 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.957 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:22.957 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.957 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.957 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.957 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:22.957 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:23.215 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:23.215 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.215 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:23.215 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:23.215 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:23.215 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.215 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.215 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.215 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:23.215 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.215 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.215 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.216 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.473 00:20:23.473 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.473 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.473 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.731 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.731 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.731 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.731 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.731 18:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.731 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.731 { 00:20:23.731 "cntlid": 69, 00:20:23.731 "qid": 0, 00:20:23.731 "state": "enabled", 00:20:23.731 "thread": "nvmf_tgt_poll_group_000", 00:20:23.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:23.731 "listen_address": { 00:20:23.731 "trtype": "TCP", 00:20:23.731 "adrfam": "IPv4", 00:20:23.731 "traddr": "10.0.0.2", 00:20:23.731 "trsvcid": "4420" 00:20:23.731 }, 00:20:23.731 "peer_address": { 00:20:23.731 "trtype": "TCP", 00:20:23.731 "adrfam": "IPv4", 00:20:23.731 "traddr": "10.0.0.1", 00:20:23.731 "trsvcid": "52954" 00:20:23.731 }, 00:20:23.731 "auth": { 00:20:23.731 "state": "completed", 00:20:23.731 "digest": "sha384", 00:20:23.731 "dhgroup": "ffdhe3072" 00:20:23.731 } 00:20:23.731 } 00:20:23.731 ]' 00:20:23.731 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.731 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.732 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.732 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:23.732 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.989 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.989 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.989 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.247 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:20:24.247 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:20:25.180 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.180 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.180 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.180 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.180 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.180 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.180 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.180 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.438 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:25.438 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.438 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:25.438 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:25.438 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:25.438 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.438 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:25.438 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.438 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.438 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.438 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:25.438 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.438 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.696 00:20:25.696 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.696 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.696 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.954 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.954 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.954 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.954 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.954 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.954 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.954 { 00:20:25.954 "cntlid": 71, 00:20:25.954 "qid": 0, 00:20:25.954 "state": "enabled", 00:20:25.954 "thread": "nvmf_tgt_poll_group_000", 00:20:25.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:25.954 "listen_address": { 00:20:25.954 "trtype": "TCP", 00:20:25.954 "adrfam": "IPv4", 00:20:25.954 "traddr": "10.0.0.2", 00:20:25.954 "trsvcid": "4420" 00:20:25.954 }, 00:20:25.954 "peer_address": { 00:20:25.954 "trtype": "TCP", 00:20:25.954 "adrfam": "IPv4", 00:20:25.954 "traddr": "10.0.0.1", 
00:20:25.954 "trsvcid": "52976" 00:20:25.954 }, 00:20:25.954 "auth": { 00:20:25.954 "state": "completed", 00:20:25.954 "digest": "sha384", 00:20:25.954 "dhgroup": "ffdhe3072" 00:20:25.954 } 00:20:25.954 } 00:20:25.954 ]' 00:20:25.954 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.212 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.212 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.212 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:26.212 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.212 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.212 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.212 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.470 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:20:26.470 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:20:27.403 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.403 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.403 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.403 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.403 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.403 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.403 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.403 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:27.403 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:27.661 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:27.662 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.662 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.662 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:27.662 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:27.662 18:27:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.662 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.662 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.662 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.662 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.662 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.662 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.662 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.228 00:20:28.228 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.228 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.228 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.485 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.485 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.485 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.485 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.485 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.485 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.485 { 00:20:28.485 "cntlid": 73, 00:20:28.485 "qid": 0, 00:20:28.485 "state": "enabled", 00:20:28.485 "thread": "nvmf_tgt_poll_group_000", 00:20:28.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:28.485 "listen_address": { 00:20:28.485 "trtype": "TCP", 00:20:28.485 "adrfam": "IPv4", 00:20:28.485 "traddr": "10.0.0.2", 00:20:28.485 "trsvcid": "4420" 00:20:28.485 }, 00:20:28.485 "peer_address": { 00:20:28.485 "trtype": "TCP", 00:20:28.485 "adrfam": "IPv4", 00:20:28.486 "traddr": "10.0.0.1", 00:20:28.486 "trsvcid": "53002" 00:20:28.486 }, 00:20:28.486 "auth": { 00:20:28.486 "state": "completed", 00:20:28.486 "digest": "sha384", 00:20:28.486 "dhgroup": "ffdhe4096" 00:20:28.486 } 00:20:28.486 } 00:20:28.486 ]' 00:20:28.486 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.486 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.486 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.486 18:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:28.486 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.486 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.486 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.486 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.751 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:20:28.751 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:20:30.128 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.128 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.129 18:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.129 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.129 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.129 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.129 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.129 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.129 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:30.129 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.129 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:30.129 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:30.129 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:30.129 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.129 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.129 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.129 18:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.129 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.129 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.129 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.129 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.694 00:20:30.694 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.694 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.694 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.951 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.951 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.951 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.952 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:30.952 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.952 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.952 { 00:20:30.952 "cntlid": 75, 00:20:30.952 "qid": 0, 00:20:30.952 "state": "enabled", 00:20:30.952 "thread": "nvmf_tgt_poll_group_000", 00:20:30.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:30.952 "listen_address": { 00:20:30.952 "trtype": "TCP", 00:20:30.952 "adrfam": "IPv4", 00:20:30.952 "traddr": "10.0.0.2", 00:20:30.952 "trsvcid": "4420" 00:20:30.952 }, 00:20:30.952 "peer_address": { 00:20:30.952 "trtype": "TCP", 00:20:30.952 "adrfam": "IPv4", 00:20:30.952 "traddr": "10.0.0.1", 00:20:30.952 "trsvcid": "53036" 00:20:30.952 }, 00:20:30.952 "auth": { 00:20:30.952 "state": "completed", 00:20:30.952 "digest": "sha384", 00:20:30.952 "dhgroup": "ffdhe4096" 00:20:30.952 } 00:20:30.952 } 00:20:30.952 ]' 00:20:30.952 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.952 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.952 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.952 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:30.952 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.952 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.952 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.952 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.209 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:20:31.209 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:20:32.141 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.142 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.142 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.142 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.142 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.142 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.142 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:32.142 18:27:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:32.400 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:32.400 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.400 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:32.400 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:32.400 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:32.400 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.400 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.400 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.400 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.400 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.400 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.400 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.400 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.967 00:20:32.967 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.967 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.967 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.225 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.225 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.225 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.225 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.225 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.225 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.225 { 00:20:33.225 "cntlid": 77, 00:20:33.225 "qid": 0, 00:20:33.225 "state": "enabled", 00:20:33.225 "thread": "nvmf_tgt_poll_group_000", 00:20:33.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:33.225 "listen_address": { 00:20:33.225 "trtype": "TCP", 00:20:33.225 "adrfam": "IPv4", 00:20:33.225 "traddr": "10.0.0.2", 00:20:33.225 
"trsvcid": "4420" 00:20:33.225 }, 00:20:33.225 "peer_address": { 00:20:33.225 "trtype": "TCP", 00:20:33.225 "adrfam": "IPv4", 00:20:33.225 "traddr": "10.0.0.1", 00:20:33.225 "trsvcid": "34166" 00:20:33.225 }, 00:20:33.225 "auth": { 00:20:33.225 "state": "completed", 00:20:33.225 "digest": "sha384", 00:20:33.225 "dhgroup": "ffdhe4096" 00:20:33.225 } 00:20:33.225 } 00:20:33.225 ]' 00:20:33.225 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.225 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.225 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.225 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:33.225 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.225 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.225 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.225 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.483 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:20:33.483 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:20:34.857 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.858 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.858 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.858 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.858 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.858 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.858 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.858 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.858 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:34.858 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.858 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:34.858 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:34.858 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:34.858 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.858 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:34.858 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.858 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.858 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.858 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:34.858 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:34.858 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:35.425 00:20:35.425 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.425 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.425 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.683 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.683 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.683 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.683 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.683 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.683 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.683 { 00:20:35.683 "cntlid": 79, 00:20:35.683 "qid": 0, 00:20:35.683 "state": "enabled", 00:20:35.683 "thread": "nvmf_tgt_poll_group_000", 00:20:35.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:35.683 "listen_address": { 00:20:35.683 "trtype": "TCP", 00:20:35.683 "adrfam": "IPv4", 00:20:35.683 "traddr": "10.0.0.2", 00:20:35.683 "trsvcid": "4420" 00:20:35.683 }, 00:20:35.683 "peer_address": { 00:20:35.683 "trtype": "TCP", 00:20:35.683 "adrfam": "IPv4", 00:20:35.683 "traddr": "10.0.0.1", 00:20:35.683 "trsvcid": "34198" 00:20:35.683 }, 00:20:35.683 "auth": { 00:20:35.683 "state": "completed", 00:20:35.683 "digest": "sha384", 00:20:35.683 "dhgroup": "ffdhe4096" 00:20:35.683 } 00:20:35.683 } 00:20:35.683 ]' 00:20:35.683 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.683 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.683 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.683 18:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.683 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.683 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.683 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.683 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.942 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:20:35.942 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:20:36.875 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.875 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.875 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.875 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
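Throughout this section the test validates each authenticated connection the same way: it dumps the subsystem's queue pairs with `nvmf_subsystem_get_qpairs` and checks the negotiated `auth.digest`, `auth.dhgroup`, and `auth.state` fields with `jq`. A minimal Python sketch of that check, using a trimmed qpairs payload shaped like the ones in the log (the `check_auth` helper is illustrative, not part of auth.sh):

```python
import json

# Trimmed nvmf_subsystem_get_qpairs output, shaped like the log's qpairs JSON.
QPAIRS = '''
[
  {
    "cntlid": 79,
    "qid": 0,
    "state": "enabled",
    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
    "auth": {
      "state": "completed",
      "digest": "sha384",
      "dhgroup": "ffdhe4096"
    }
  }
]
'''

def check_auth(payload: str, digest: str, dhgroup: str) -> bool:
    # Mirrors the jq probes: .[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state.
    auth = json.loads(payload)[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

print(check_auth(QPAIRS, "sha384", "ffdhe4096"))  # True
```

An "auth.state" of "completed" is what distinguishes a successfully authenticated qpair from one that merely connected; the test asserts all three fields per digest/dhgroup/key combination.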
00:20:37.134 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.134 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.134 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.134 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:37.134 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:37.391 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:37.391 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.391 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:37.391 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:37.391 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:37.391 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.391 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.391 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.391 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:37.391 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.391 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.391 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.391 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.957 00:20:37.957 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.957 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.957 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.216 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.216 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.216 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.216 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.216 18:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.216 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.216 { 00:20:38.216 "cntlid": 81, 00:20:38.216 "qid": 0, 00:20:38.216 "state": "enabled", 00:20:38.216 "thread": "nvmf_tgt_poll_group_000", 00:20:38.216 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:38.216 "listen_address": { 00:20:38.216 "trtype": "TCP", 00:20:38.216 "adrfam": "IPv4", 00:20:38.216 "traddr": "10.0.0.2", 00:20:38.216 "trsvcid": "4420" 00:20:38.216 }, 00:20:38.216 "peer_address": { 00:20:38.216 "trtype": "TCP", 00:20:38.216 "adrfam": "IPv4", 00:20:38.216 "traddr": "10.0.0.1", 00:20:38.216 "trsvcid": "34230" 00:20:38.216 }, 00:20:38.216 "auth": { 00:20:38.216 "state": "completed", 00:20:38.216 "digest": "sha384", 00:20:38.216 "dhgroup": "ffdhe6144" 00:20:38.216 } 00:20:38.216 } 00:20:38.216 ]' 00:20:38.216 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.216 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.216 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.216 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:38.216 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.216 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.216 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.216 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.780 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:20:38.780 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:20:39.714 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.714 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.714 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.714 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.714 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.714 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.714 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:39.714 18:27:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:39.972 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:39.972 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.972 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:39.972 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:39.972 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:39.972 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.972 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.972 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.972 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.972 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.972 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.972 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.972 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.538 00:20:40.538 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.538 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.538 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.796 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.796 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.796 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.796 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.796 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.796 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.796 { 00:20:40.796 "cntlid": 83, 00:20:40.796 "qid": 0, 00:20:40.796 "state": "enabled", 00:20:40.796 "thread": "nvmf_tgt_poll_group_000", 00:20:40.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:40.796 "listen_address": { 00:20:40.796 "trtype": "TCP", 00:20:40.796 "adrfam": "IPv4", 00:20:40.796 "traddr": "10.0.0.2", 00:20:40.796 
"trsvcid": "4420" 00:20:40.796 }, 00:20:40.796 "peer_address": { 00:20:40.796 "trtype": "TCP", 00:20:40.796 "adrfam": "IPv4", 00:20:40.796 "traddr": "10.0.0.1", 00:20:40.796 "trsvcid": "34252" 00:20:40.796 }, 00:20:40.796 "auth": { 00:20:40.796 "state": "completed", 00:20:40.796 "digest": "sha384", 00:20:40.796 "dhgroup": "ffdhe6144" 00:20:40.796 } 00:20:40.796 } 00:20:40.796 ]' 00:20:40.796 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.796 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.796 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.796 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:40.796 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.796 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.796 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.796 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.076 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:20:41.076 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.492 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.057 00:20:43.057 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.057 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:20:43.057 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.315 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.315 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.315 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.315 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.315 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.315 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.315 { 00:20:43.315 "cntlid": 85, 00:20:43.315 "qid": 0, 00:20:43.315 "state": "enabled", 00:20:43.315 "thread": "nvmf_tgt_poll_group_000", 00:20:43.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:43.315 "listen_address": { 00:20:43.315 "trtype": "TCP", 00:20:43.315 "adrfam": "IPv4", 00:20:43.315 "traddr": "10.0.0.2", 00:20:43.315 "trsvcid": "4420" 00:20:43.315 }, 00:20:43.315 "peer_address": { 00:20:43.316 "trtype": "TCP", 00:20:43.316 "adrfam": "IPv4", 00:20:43.316 "traddr": "10.0.0.1", 00:20:43.316 "trsvcid": "46370" 00:20:43.316 }, 00:20:43.316 "auth": { 00:20:43.316 "state": "completed", 00:20:43.316 "digest": "sha384", 00:20:43.316 "dhgroup": "ffdhe6144" 00:20:43.316 } 00:20:43.316 } 00:20:43.316 ]' 00:20:43.316 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.316 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.316 18:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.316 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:43.316 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.316 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.316 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.316 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.881 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:20:43.881 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:20:44.814 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.814 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.814 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.814 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.814 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.814 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.814 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.814 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:45.071 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:45.071 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.071 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.071 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:45.071 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:45.071 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.071 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:45.071 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.071 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.071 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.072 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:45.072 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.072 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.637 00:20:45.637 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.637 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.637 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:45.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.895 { 00:20:45.895 "cntlid": 87, 00:20:45.895 "qid": 0, 00:20:45.895 "state": "enabled", 00:20:45.895 "thread": "nvmf_tgt_poll_group_000", 00:20:45.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:45.895 "listen_address": { 00:20:45.895 "trtype": "TCP", 00:20:45.895 "adrfam": "IPv4", 00:20:45.895 "traddr": "10.0.0.2", 00:20:45.895 "trsvcid": "4420" 00:20:45.895 }, 00:20:45.895 "peer_address": { 00:20:45.895 "trtype": "TCP", 00:20:45.895 "adrfam": "IPv4", 00:20:45.895 "traddr": "10.0.0.1", 00:20:45.895 "trsvcid": "46394" 00:20:45.895 }, 00:20:45.895 "auth": { 00:20:45.895 "state": "completed", 00:20:45.895 "digest": "sha384", 00:20:45.895 "dhgroup": "ffdhe6144" 00:20:45.895 } 00:20:45.895 } 00:20:45.895 ]' 00:20:45.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:45.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.153 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:20:46.153 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:20:47.086 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.344 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.344 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.344 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.344 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.344 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.344 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.344 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:47.344 18:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:47.602 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:47.602 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.602 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.602 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:47.602 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:47.602 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.602 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.602 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.602 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.602 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.602 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.602 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.602 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.588 00:20:48.588 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.588 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.588 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.588 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.588 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.588 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.588 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.588 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.588 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.588 { 00:20:48.588 "cntlid": 89, 00:20:48.588 "qid": 0, 00:20:48.588 "state": "enabled", 00:20:48.588 "thread": "nvmf_tgt_poll_group_000", 00:20:48.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:48.588 "listen_address": { 00:20:48.588 "trtype": "TCP", 00:20:48.588 "adrfam": "IPv4", 00:20:48.588 "traddr": "10.0.0.2", 00:20:48.588 
"trsvcid": "4420" 00:20:48.588 }, 00:20:48.588 "peer_address": { 00:20:48.588 "trtype": "TCP", 00:20:48.588 "adrfam": "IPv4", 00:20:48.588 "traddr": "10.0.0.1", 00:20:48.588 "trsvcid": "46408" 00:20:48.588 }, 00:20:48.588 "auth": { 00:20:48.588 "state": "completed", 00:20:48.588 "digest": "sha384", 00:20:48.588 "dhgroup": "ffdhe8192" 00:20:48.588 } 00:20:48.588 } 00:20:48.588 ]' 00:20:48.588 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.588 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.588 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.846 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:48.846 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.846 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.846 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.846 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.103 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:20:49.104 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:20:50.037 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.037 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.037 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.037 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.037 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.037 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.037 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.037 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.296 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:50.296 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.296 18:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:50.296 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:50.296 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:50.296 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.296 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.296 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.296 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.296 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.296 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.296 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.296 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.229 00:20:51.229 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.229 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.229 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.487 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.487 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.487 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.487 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.487 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.487 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.487 { 00:20:51.487 "cntlid": 91, 00:20:51.487 "qid": 0, 00:20:51.487 "state": "enabled", 00:20:51.487 "thread": "nvmf_tgt_poll_group_000", 00:20:51.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:51.487 "listen_address": { 00:20:51.487 "trtype": "TCP", 00:20:51.487 "adrfam": "IPv4", 00:20:51.487 "traddr": "10.0.0.2", 00:20:51.487 "trsvcid": "4420" 00:20:51.487 }, 00:20:51.487 "peer_address": { 00:20:51.487 "trtype": "TCP", 00:20:51.487 "adrfam": "IPv4", 00:20:51.487 "traddr": "10.0.0.1", 00:20:51.487 "trsvcid": "46428" 00:20:51.487 }, 00:20:51.487 "auth": { 00:20:51.487 "state": "completed", 00:20:51.487 "digest": "sha384", 00:20:51.487 "dhgroup": "ffdhe8192" 00:20:51.487 } 00:20:51.487 } 00:20:51.487 ]' 00:20:51.487 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.745 18:27:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.745 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.745 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:51.745 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.745 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.745 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.745 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.004 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:20:52.004 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:20:52.941 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.941 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.941 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.941 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.941 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.941 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.941 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.941 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:53.199 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:53.199 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.199 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:53.199 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:53.199 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:53.199 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.199 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:53.199 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.199 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.199 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.199 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.199 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.199 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.132 00:20:54.132 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.132 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.132 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.390 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.390 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.390 18:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.390 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.390 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.390 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.390 { 00:20:54.390 "cntlid": 93, 00:20:54.390 "qid": 0, 00:20:54.390 "state": "enabled", 00:20:54.390 "thread": "nvmf_tgt_poll_group_000", 00:20:54.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:54.390 "listen_address": { 00:20:54.390 "trtype": "TCP", 00:20:54.391 "adrfam": "IPv4", 00:20:54.391 "traddr": "10.0.0.2", 00:20:54.391 "trsvcid": "4420" 00:20:54.391 }, 00:20:54.391 "peer_address": { 00:20:54.391 "trtype": "TCP", 00:20:54.391 "adrfam": "IPv4", 00:20:54.391 "traddr": "10.0.0.1", 00:20:54.391 "trsvcid": "52988" 00:20:54.391 }, 00:20:54.391 "auth": { 00:20:54.391 "state": "completed", 00:20:54.391 "digest": "sha384", 00:20:54.391 "dhgroup": "ffdhe8192" 00:20:54.391 } 00:20:54.391 } 00:20:54.391 ]' 00:20:54.391 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.391 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.391 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.391 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:54.391 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.649 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.649 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.649 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.907 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:20:54.907 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:20:55.841 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.841 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.841 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.841 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.841 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.841 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.841 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:55.841 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:56.100 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:56.100 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.100 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:56.100 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:56.100 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:56.100 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.100 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:56.100 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.100 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.100 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.100 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:56.100 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:56.100 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.033 00:20:57.033 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.033 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.033 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.291 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.291 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.291 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.291 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.291 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.291 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.291 { 00:20:57.291 "cntlid": 95, 00:20:57.291 "qid": 0, 00:20:57.291 "state": "enabled", 00:20:57.291 "thread": "nvmf_tgt_poll_group_000", 00:20:57.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:57.291 "listen_address": { 00:20:57.291 "trtype": "TCP", 00:20:57.291 "adrfam": 
"IPv4", 00:20:57.291 "traddr": "10.0.0.2", 00:20:57.291 "trsvcid": "4420" 00:20:57.291 }, 00:20:57.291 "peer_address": { 00:20:57.291 "trtype": "TCP", 00:20:57.291 "adrfam": "IPv4", 00:20:57.291 "traddr": "10.0.0.1", 00:20:57.291 "trsvcid": "53008" 00:20:57.291 }, 00:20:57.291 "auth": { 00:20:57.291 "state": "completed", 00:20:57.291 "digest": "sha384", 00:20:57.291 "dhgroup": "ffdhe8192" 00:20:57.291 } 00:20:57.291 } 00:20:57.291 ]' 00:20:57.291 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.291 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.291 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.291 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:57.291 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.291 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.291 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.291 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.549 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:20:57.549 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:20:58.923 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.923 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.923 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.923 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.923 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.923 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:58.923 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.923 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.923 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:58.923 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:58.923 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:58.923 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.923 
18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:58.923 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:58.923 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:58.923 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.923 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.923 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.923 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.923 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.924 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.924 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.924 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.489 00:20:59.489 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.489 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.489 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.747 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.747 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.747 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.747 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.747 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.747 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.747 { 00:20:59.747 "cntlid": 97, 00:20:59.747 "qid": 0, 00:20:59.747 "state": "enabled", 00:20:59.747 "thread": "nvmf_tgt_poll_group_000", 00:20:59.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:59.747 "listen_address": { 00:20:59.747 "trtype": "TCP", 00:20:59.747 "adrfam": "IPv4", 00:20:59.747 "traddr": "10.0.0.2", 00:20:59.747 "trsvcid": "4420" 00:20:59.747 }, 00:20:59.747 "peer_address": { 00:20:59.747 "trtype": "TCP", 00:20:59.747 "adrfam": "IPv4", 00:20:59.747 "traddr": "10.0.0.1", 00:20:59.747 "trsvcid": "53038" 00:20:59.747 }, 00:20:59.747 "auth": { 00:20:59.747 "state": "completed", 00:20:59.747 "digest": "sha512", 00:20:59.747 "dhgroup": "null" 00:20:59.747 } 00:20:59.747 } 00:20:59.747 ]' 00:20:59.747 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.747 18:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.747 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.747 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:59.747 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.747 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.747 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.747 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.005 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:21:00.005 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:21:00.945 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.945 18:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.946 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.946 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.946 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.946 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.946 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:00.946 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:01.204 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:01.204 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.204 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:01.204 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:01.204 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:01.204 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.204 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.204 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.204 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.462 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.462 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.462 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.462 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.721 00:21:01.721 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.721 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.721 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.983 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.983 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.983 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.983 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.983 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.983 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.983 { 00:21:01.983 "cntlid": 99, 00:21:01.983 "qid": 0, 00:21:01.983 "state": "enabled", 00:21:01.983 "thread": "nvmf_tgt_poll_group_000", 00:21:01.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:01.983 "listen_address": { 00:21:01.983 "trtype": "TCP", 00:21:01.983 "adrfam": "IPv4", 00:21:01.983 "traddr": "10.0.0.2", 00:21:01.983 "trsvcid": "4420" 00:21:01.983 }, 00:21:01.983 "peer_address": { 00:21:01.983 "trtype": "TCP", 00:21:01.983 "adrfam": "IPv4", 00:21:01.983 "traddr": "10.0.0.1", 00:21:01.983 "trsvcid": "53072" 00:21:01.983 }, 00:21:01.983 "auth": { 00:21:01.983 "state": "completed", 00:21:01.983 "digest": "sha512", 00:21:01.983 "dhgroup": "null" 00:21:01.983 } 00:21:01.983 } 00:21:01.983 ]' 00:21:01.983 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.983 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.983 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.983 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:01.983 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.983 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.983 
18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.983 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.241 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:21:02.242 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:21:03.175 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.432 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.432 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.432 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.432 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.432 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.432 
18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:03.432 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:03.690 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:03.690 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.690 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:03.690 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:03.690 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:03.690 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.690 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.690 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.690 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.690 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.690 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.690 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.690 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.948 00:21:03.948 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.948 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.948 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.206 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.206 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.206 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.206 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.206 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.206 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.206 { 00:21:04.206 "cntlid": 101, 00:21:04.206 "qid": 0, 00:21:04.206 "state": "enabled", 00:21:04.206 "thread": "nvmf_tgt_poll_group_000", 00:21:04.206 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:04.206 "listen_address": { 00:21:04.206 "trtype": "TCP", 00:21:04.206 "adrfam": "IPv4", 00:21:04.206 "traddr": "10.0.0.2", 00:21:04.206 "trsvcid": "4420" 00:21:04.206 }, 00:21:04.206 "peer_address": { 00:21:04.206 "trtype": "TCP", 00:21:04.206 "adrfam": "IPv4", 00:21:04.206 "traddr": "10.0.0.1", 00:21:04.206 "trsvcid": "37370" 00:21:04.206 }, 00:21:04.206 "auth": { 00:21:04.206 "state": "completed", 00:21:04.206 "digest": "sha512", 00:21:04.206 "dhgroup": "null" 00:21:04.206 } 00:21:04.206 } 00:21:04.206 ]' 00:21:04.206 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.206 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.206 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.206 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:04.206 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.464 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.464 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.464 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.722 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:21:04.722 18:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:21:05.655 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.655 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.655 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.655 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.655 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.655 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.655 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:05.655 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:05.913 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:05.913 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:21:05.913 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.913 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:05.913 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:05.913 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.913 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:05.913 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.913 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.913 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.913 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:05.913 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:05.913 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.170 00:21:06.170 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.170 
18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.171 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.428 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.428 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.428 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.428 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.428 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.428 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.428 { 00:21:06.428 "cntlid": 103, 00:21:06.428 "qid": 0, 00:21:06.428 "state": "enabled", 00:21:06.428 "thread": "nvmf_tgt_poll_group_000", 00:21:06.428 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:06.428 "listen_address": { 00:21:06.428 "trtype": "TCP", 00:21:06.428 "adrfam": "IPv4", 00:21:06.428 "traddr": "10.0.0.2", 00:21:06.428 "trsvcid": "4420" 00:21:06.428 }, 00:21:06.428 "peer_address": { 00:21:06.428 "trtype": "TCP", 00:21:06.428 "adrfam": "IPv4", 00:21:06.428 "traddr": "10.0.0.1", 00:21:06.428 "trsvcid": "37386" 00:21:06.428 }, 00:21:06.428 "auth": { 00:21:06.428 "state": "completed", 00:21:06.428 "digest": "sha512", 00:21:06.428 "dhgroup": "null" 00:21:06.428 } 00:21:06.428 } 00:21:06.428 ]' 00:21:06.428 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.686 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:21:06.686 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.686 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:06.686 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.686 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.686 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.686 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.943 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:21:06.943 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:21:07.876 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.876 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.876 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.876 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.876 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.876 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.876 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.876 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:07.876 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:08.133 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:08.133 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.133 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.133 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:08.133 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:08.133 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.133 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.133 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.133 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.133 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.133 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.133 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.133 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.391 00:21:08.391 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.391 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.391 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.648 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.648 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.648 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:08.648 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.648 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.648 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.648 { 00:21:08.648 "cntlid": 105, 00:21:08.648 "qid": 0, 00:21:08.648 "state": "enabled", 00:21:08.648 "thread": "nvmf_tgt_poll_group_000", 00:21:08.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:08.648 "listen_address": { 00:21:08.648 "trtype": "TCP", 00:21:08.648 "adrfam": "IPv4", 00:21:08.648 "traddr": "10.0.0.2", 00:21:08.648 "trsvcid": "4420" 00:21:08.648 }, 00:21:08.648 "peer_address": { 00:21:08.648 "trtype": "TCP", 00:21:08.648 "adrfam": "IPv4", 00:21:08.648 "traddr": "10.0.0.1", 00:21:08.648 "trsvcid": "37428" 00:21:08.648 }, 00:21:08.648 "auth": { 00:21:08.648 "state": "completed", 00:21:08.648 "digest": "sha512", 00:21:08.648 "dhgroup": "ffdhe2048" 00:21:08.648 } 00:21:08.648 } 00:21:08.648 ]' 00:21:08.648 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.906 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.906 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.906 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:08.906 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.906 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.906 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.906 18:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.163 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:21:09.164 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:21:10.096 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.096 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.096 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.096 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.096 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.096 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.096 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.096 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.355 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:10.355 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.355 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:10.355 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:10.355 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:10.355 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.355 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.355 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.355 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.355 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.355 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.355 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.355 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.613 00:21:10.613 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.613 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.613 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.178 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.178 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.178 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.178 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.178 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.178 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.178 { 00:21:11.178 "cntlid": 107, 00:21:11.178 "qid": 0, 00:21:11.178 "state": "enabled", 00:21:11.179 "thread": "nvmf_tgt_poll_group_000", 00:21:11.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:11.179 
"listen_address": { 00:21:11.179 "trtype": "TCP", 00:21:11.179 "adrfam": "IPv4", 00:21:11.179 "traddr": "10.0.0.2", 00:21:11.179 "trsvcid": "4420" 00:21:11.179 }, 00:21:11.179 "peer_address": { 00:21:11.179 "trtype": "TCP", 00:21:11.179 "adrfam": "IPv4", 00:21:11.179 "traddr": "10.0.0.1", 00:21:11.179 "trsvcid": "37466" 00:21:11.179 }, 00:21:11.179 "auth": { 00:21:11.179 "state": "completed", 00:21:11.179 "digest": "sha512", 00:21:11.179 "dhgroup": "ffdhe2048" 00:21:11.179 } 00:21:11.179 } 00:21:11.179 ]' 00:21:11.179 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.179 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.179 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.179 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:11.179 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.179 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.179 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.179 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.436 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:21:11.436 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:21:12.371 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.371 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.371 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.371 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.371 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.371 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.371 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:12.371 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:12.637 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:12.637 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.637 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:12.637 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:12.637 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:12.637 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.637 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.637 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.637 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.637 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.637 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.637 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.637 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.957 00:21:12.957 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:12.957 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.957 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.240 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.240 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.240 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.240 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.240 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.240 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.241 { 00:21:13.241 "cntlid": 109, 00:21:13.241 "qid": 0, 00:21:13.241 "state": "enabled", 00:21:13.241 "thread": "nvmf_tgt_poll_group_000", 00:21:13.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:13.241 "listen_address": { 00:21:13.241 "trtype": "TCP", 00:21:13.241 "adrfam": "IPv4", 00:21:13.241 "traddr": "10.0.0.2", 00:21:13.241 "trsvcid": "4420" 00:21:13.241 }, 00:21:13.241 "peer_address": { 00:21:13.241 "trtype": "TCP", 00:21:13.241 "adrfam": "IPv4", 00:21:13.241 "traddr": "10.0.0.1", 00:21:13.241 "trsvcid": "59874" 00:21:13.241 }, 00:21:13.241 "auth": { 00:21:13.241 "state": "completed", 00:21:13.241 "digest": "sha512", 00:21:13.241 "dhgroup": "ffdhe2048" 00:21:13.241 } 00:21:13.241 } 00:21:13.241 ]' 00:21:13.241 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.499 18:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.499 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.499 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:13.499 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.499 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.499 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.499 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.757 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:21:13.757 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:21:14.692 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.692 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.692 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.692 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.692 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.692 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.692 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:14.692 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:14.950 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:14.950 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.950 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.950 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:14.950 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:14.950 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.950 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:14.950 18:28:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.950 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.950 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.950 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:14.950 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:14.950 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.208 00:21:15.208 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.208 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.208 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.466 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.466 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.466 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.466 18:28:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.724 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.724 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.724 { 00:21:15.724 "cntlid": 111, 00:21:15.724 "qid": 0, 00:21:15.724 "state": "enabled", 00:21:15.724 "thread": "nvmf_tgt_poll_group_000", 00:21:15.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:15.724 "listen_address": { 00:21:15.724 "trtype": "TCP", 00:21:15.724 "adrfam": "IPv4", 00:21:15.724 "traddr": "10.0.0.2", 00:21:15.724 "trsvcid": "4420" 00:21:15.724 }, 00:21:15.724 "peer_address": { 00:21:15.724 "trtype": "TCP", 00:21:15.724 "adrfam": "IPv4", 00:21:15.724 "traddr": "10.0.0.1", 00:21:15.724 "trsvcid": "59910" 00:21:15.724 }, 00:21:15.724 "auth": { 00:21:15.724 "state": "completed", 00:21:15.724 "digest": "sha512", 00:21:15.724 "dhgroup": "ffdhe2048" 00:21:15.724 } 00:21:15.724 } 00:21:15.724 ]' 00:21:15.724 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.724 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.724 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.724 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:15.725 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.725 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.725 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.725 18:28:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.983 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:21:15.983 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:21:16.917 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.917 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.917 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.917 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.917 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.917 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:16.917 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.917 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:21:16.917 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:17.176 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:17.434 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.434 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.434 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:17.434 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:17.434 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.434 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.434 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.434 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.434 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.434 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.434 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.434 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.692 00:21:17.692 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.692 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.692 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.950 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.950 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.950 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.950 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.950 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.950 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.950 { 00:21:17.950 "cntlid": 113, 00:21:17.950 "qid": 0, 00:21:17.950 "state": "enabled", 00:21:17.950 "thread": "nvmf_tgt_poll_group_000", 00:21:17.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:17.950 "listen_address": { 
00:21:17.950 "trtype": "TCP", 00:21:17.950 "adrfam": "IPv4", 00:21:17.950 "traddr": "10.0.0.2", 00:21:17.950 "trsvcid": "4420" 00:21:17.950 }, 00:21:17.950 "peer_address": { 00:21:17.950 "trtype": "TCP", 00:21:17.950 "adrfam": "IPv4", 00:21:17.950 "traddr": "10.0.0.1", 00:21:17.950 "trsvcid": "59922" 00:21:17.950 }, 00:21:17.950 "auth": { 00:21:17.950 "state": "completed", 00:21:17.950 "digest": "sha512", 00:21:17.950 "dhgroup": "ffdhe3072" 00:21:17.950 } 00:21:17.950 } 00:21:17.950 ]' 00:21:17.950 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.950 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.950 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.950 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:17.950 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.950 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.950 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.950 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.517 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:21:18.517 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:21:19.452 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.452 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.452 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.452 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.452 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.452 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.452 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:19.452 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:19.710 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:19.710 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:21:19.710 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:19.710 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:19.710 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:19.710 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.710 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.710 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.710 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.710 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.710 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.710 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.710 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.969 00:21:19.969 18:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.969 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.969 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.227 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.227 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.227 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.227 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.227 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.227 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.227 { 00:21:20.227 "cntlid": 115, 00:21:20.227 "qid": 0, 00:21:20.227 "state": "enabled", 00:21:20.227 "thread": "nvmf_tgt_poll_group_000", 00:21:20.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:20.227 "listen_address": { 00:21:20.227 "trtype": "TCP", 00:21:20.227 "adrfam": "IPv4", 00:21:20.227 "traddr": "10.0.0.2", 00:21:20.227 "trsvcid": "4420" 00:21:20.227 }, 00:21:20.227 "peer_address": { 00:21:20.227 "trtype": "TCP", 00:21:20.227 "adrfam": "IPv4", 00:21:20.227 "traddr": "10.0.0.1", 00:21:20.227 "trsvcid": "59964" 00:21:20.227 }, 00:21:20.227 "auth": { 00:21:20.227 "state": "completed", 00:21:20.227 "digest": "sha512", 00:21:20.227 "dhgroup": "ffdhe3072" 00:21:20.227 } 00:21:20.227 } 00:21:20.227 ]' 00:21:20.227 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:21:20.227 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.227 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.485 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:20.485 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.485 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.485 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.485 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.744 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:21:20.744 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:21:21.677 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.677 18:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.677 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.677 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.677 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.677 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.677 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:21.677 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:21.935 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:21.935 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.935 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:21.935 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:21.935 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:21.935 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.935 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.935 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.935 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.935 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.935 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.935 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.935 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.193 00:21:22.193 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.193 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.193 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.451 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.451 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.451 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.451 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.451 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.451 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.451 { 00:21:22.451 "cntlid": 117, 00:21:22.451 "qid": 0, 00:21:22.451 "state": "enabled", 00:21:22.451 "thread": "nvmf_tgt_poll_group_000", 00:21:22.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:22.451 "listen_address": { 00:21:22.451 "trtype": "TCP", 00:21:22.451 "adrfam": "IPv4", 00:21:22.451 "traddr": "10.0.0.2", 00:21:22.451 "trsvcid": "4420" 00:21:22.451 }, 00:21:22.451 "peer_address": { 00:21:22.451 "trtype": "TCP", 00:21:22.451 "adrfam": "IPv4", 00:21:22.451 "traddr": "10.0.0.1", 00:21:22.451 "trsvcid": "59986" 00:21:22.451 }, 00:21:22.451 "auth": { 00:21:22.451 "state": "completed", 00:21:22.451 "digest": "sha512", 00:21:22.451 "dhgroup": "ffdhe3072" 00:21:22.451 } 00:21:22.451 } 00:21:22.451 ]' 00:21:22.451 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.710 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.710 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.710 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:22.710 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.710 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:22.710 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.710 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.968 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:21:22.968 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:21:23.902 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.902 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.902 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.902 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.902 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.902 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:23.902 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.902 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.160 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:24.160 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.160 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.160 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:24.160 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:24.160 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.160 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:24.160 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.160 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.160 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.160 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:24.160 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.160 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.725 00:21:24.725 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.725 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.725 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.983 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.983 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.983 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.983 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.983 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.983 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.983 { 00:21:24.983 "cntlid": 119, 00:21:24.983 "qid": 0, 00:21:24.983 "state": "enabled", 00:21:24.983 "thread": "nvmf_tgt_poll_group_000", 00:21:24.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:24.983 "listen_address": { 00:21:24.983 
"trtype": "TCP", 00:21:24.983 "adrfam": "IPv4", 00:21:24.983 "traddr": "10.0.0.2", 00:21:24.983 "trsvcid": "4420" 00:21:24.983 }, 00:21:24.983 "peer_address": { 00:21:24.983 "trtype": "TCP", 00:21:24.983 "adrfam": "IPv4", 00:21:24.983 "traddr": "10.0.0.1", 00:21:24.983 "trsvcid": "54886" 00:21:24.983 }, 00:21:24.983 "auth": { 00:21:24.983 "state": "completed", 00:21:24.983 "digest": "sha512", 00:21:24.983 "dhgroup": "ffdhe3072" 00:21:24.983 } 00:21:24.983 } 00:21:24.983 ]' 00:21:24.983 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.983 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.983 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.983 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:24.983 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.983 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.983 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.983 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.241 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:21:25.241 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:21:26.174 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.432 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.432 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.432 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.432 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.432 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:26.432 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.432 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:26.432 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:26.690 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:26.690 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.690 18:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.690 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:26.690 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:26.690 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.690 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.690 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.690 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.690 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.690 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.690 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.690 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.948 00:21:26.948 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.948 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.948 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.206 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.206 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.206 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.206 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.207 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.207 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.207 { 00:21:27.207 "cntlid": 121, 00:21:27.207 "qid": 0, 00:21:27.207 "state": "enabled", 00:21:27.207 "thread": "nvmf_tgt_poll_group_000", 00:21:27.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:27.207 "listen_address": { 00:21:27.207 "trtype": "TCP", 00:21:27.207 "adrfam": "IPv4", 00:21:27.207 "traddr": "10.0.0.2", 00:21:27.207 "trsvcid": "4420" 00:21:27.207 }, 00:21:27.207 "peer_address": { 00:21:27.207 "trtype": "TCP", 00:21:27.207 "adrfam": "IPv4", 00:21:27.207 "traddr": "10.0.0.1", 00:21:27.207 "trsvcid": "54912" 00:21:27.207 }, 00:21:27.207 "auth": { 00:21:27.207 "state": "completed", 00:21:27.207 "digest": "sha512", 00:21:27.207 "dhgroup": "ffdhe4096" 00:21:27.207 } 00:21:27.207 } 00:21:27.207 ]' 00:21:27.207 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.464 18:28:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.464 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.464 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:27.464 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.464 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.464 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.464 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.722 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:21:27.722 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:21:28.654 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:21:28.654 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.654 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.654 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.654 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.654 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.654 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.654 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.913 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:28.913 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.913 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.913 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:28.913 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:28.913 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.913 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.913 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.913 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.913 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.913 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.913 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.913 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.479 00:21:29.479 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.479 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.479 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.737 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.737 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.737 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.737 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.737 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.737 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.737 { 00:21:29.737 "cntlid": 123, 00:21:29.737 "qid": 0, 00:21:29.737 "state": "enabled", 00:21:29.737 "thread": "nvmf_tgt_poll_group_000", 00:21:29.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:29.737 "listen_address": { 00:21:29.737 "trtype": "TCP", 00:21:29.737 "adrfam": "IPv4", 00:21:29.737 "traddr": "10.0.0.2", 00:21:29.737 "trsvcid": "4420" 00:21:29.737 }, 00:21:29.737 "peer_address": { 00:21:29.737 "trtype": "TCP", 00:21:29.737 "adrfam": "IPv4", 00:21:29.737 "traddr": "10.0.0.1", 00:21:29.737 "trsvcid": "54940" 00:21:29.737 }, 00:21:29.737 "auth": { 00:21:29.737 "state": "completed", 00:21:29.737 "digest": "sha512", 00:21:29.737 "dhgroup": "ffdhe4096" 00:21:29.737 } 00:21:29.737 } 00:21:29.737 ]' 00:21:29.737 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.737 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.737 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.737 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:29.737 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.737 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:29.737 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.737 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.995 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:21:29.995 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:21:30.928 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.186 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.186 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.186 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.186 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.186 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:31.186 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:31.186 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:31.443 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:31.443 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.443 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.443 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:31.443 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:31.443 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.443 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.443 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.443 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.443 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.443 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.443 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.443 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.706 00:21:31.706 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.706 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.706 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.964 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.964 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.964 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.964 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.964 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.964 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.964 { 00:21:31.964 "cntlid": 125, 00:21:31.964 "qid": 0, 00:21:31.964 "state": "enabled", 00:21:31.964 "thread": "nvmf_tgt_poll_group_000", 00:21:31.964 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:31.964 "listen_address": { 00:21:31.964 "trtype": "TCP", 00:21:31.964 "adrfam": "IPv4", 00:21:31.964 "traddr": "10.0.0.2", 00:21:31.964 "trsvcid": "4420" 00:21:31.964 }, 00:21:31.964 "peer_address": { 00:21:31.964 "trtype": "TCP", 00:21:31.964 "adrfam": "IPv4", 00:21:31.964 "traddr": "10.0.0.1", 00:21:31.964 "trsvcid": "54976" 00:21:31.964 }, 00:21:31.964 "auth": { 00:21:31.964 "state": "completed", 00:21:31.964 "digest": "sha512", 00:21:31.964 "dhgroup": "ffdhe4096" 00:21:31.964 } 00:21:31.964 } 00:21:31.964 ]' 00:21:31.964 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.221 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.221 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.221 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:32.221 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.221 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.221 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.221 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.479 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:21:32.479 18:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:21:33.414 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.414 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.414 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.414 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.414 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.414 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.414 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.414 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.672 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:33.672 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:21:33.672 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.672 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:33.672 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:33.672 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.672 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:33.672 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.672 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.672 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.672 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:33.672 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.672 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.238 00:21:34.238 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:34.239 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.239 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.497 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.497 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.497 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.497 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.497 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.497 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.497 { 00:21:34.497 "cntlid": 127, 00:21:34.497 "qid": 0, 00:21:34.497 "state": "enabled", 00:21:34.497 "thread": "nvmf_tgt_poll_group_000", 00:21:34.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:34.497 "listen_address": { 00:21:34.497 "trtype": "TCP", 00:21:34.497 "adrfam": "IPv4", 00:21:34.497 "traddr": "10.0.0.2", 00:21:34.497 "trsvcid": "4420" 00:21:34.497 }, 00:21:34.497 "peer_address": { 00:21:34.497 "trtype": "TCP", 00:21:34.497 "adrfam": "IPv4", 00:21:34.497 "traddr": "10.0.0.1", 00:21:34.497 "trsvcid": "60982" 00:21:34.497 }, 00:21:34.497 "auth": { 00:21:34.497 "state": "completed", 00:21:34.497 "digest": "sha512", 00:21:34.497 "dhgroup": "ffdhe4096" 00:21:34.497 } 00:21:34.497 } 00:21:34.497 ]' 00:21:34.497 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.497 18:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.497 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.497 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:34.497 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.497 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.497 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.497 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.755 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:21:34.755 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:21:35.689 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.689 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.689 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.689 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.689 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.689 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.689 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.689 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:35.690 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:36.256 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:36.256 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.256 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.256 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:36.256 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:36.256 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.256 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.256 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.256 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.256 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.256 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.256 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.256 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.514 00:21:36.772 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.772 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.772 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.030 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.030 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.030 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.030 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.030 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.030 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.030 { 00:21:37.030 "cntlid": 129, 00:21:37.030 "qid": 0, 00:21:37.030 "state": "enabled", 00:21:37.030 "thread": "nvmf_tgt_poll_group_000", 00:21:37.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:37.030 "listen_address": { 00:21:37.030 "trtype": "TCP", 00:21:37.030 "adrfam": "IPv4", 00:21:37.030 "traddr": "10.0.0.2", 00:21:37.030 "trsvcid": "4420" 00:21:37.030 }, 00:21:37.030 "peer_address": { 00:21:37.030 "trtype": "TCP", 00:21:37.030 "adrfam": "IPv4", 00:21:37.030 "traddr": "10.0.0.1", 00:21:37.030 "trsvcid": "60992" 00:21:37.030 }, 00:21:37.030 "auth": { 00:21:37.030 "state": "completed", 00:21:37.030 "digest": "sha512", 00:21:37.030 "dhgroup": "ffdhe6144" 00:21:37.030 } 00:21:37.030 } 00:21:37.030 ]' 00:21:37.030 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.030 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.030 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.030 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:37.030 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.030 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:37.030 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.030 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.290 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:21:37.290 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:21:38.223 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.223 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.223 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.223 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.223 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.223 18:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.223 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:38.223 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:38.790 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:38.790 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.790 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.790 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:38.790 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:38.790 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.790 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.790 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.790 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.790 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.790 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:38.790 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.790 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.357 00:21:39.357 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.357 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.357 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.614 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.614 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.614 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.614 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.615 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.615 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.615 { 00:21:39.615 "cntlid": 131, 00:21:39.615 "qid": 0, 00:21:39.615 "state": 
"enabled", 00:21:39.615 "thread": "nvmf_tgt_poll_group_000", 00:21:39.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:39.615 "listen_address": { 00:21:39.615 "trtype": "TCP", 00:21:39.615 "adrfam": "IPv4", 00:21:39.615 "traddr": "10.0.0.2", 00:21:39.615 "trsvcid": "4420" 00:21:39.615 }, 00:21:39.615 "peer_address": { 00:21:39.615 "trtype": "TCP", 00:21:39.615 "adrfam": "IPv4", 00:21:39.615 "traddr": "10.0.0.1", 00:21:39.615 "trsvcid": "32800" 00:21:39.615 }, 00:21:39.615 "auth": { 00:21:39.615 "state": "completed", 00:21:39.615 "digest": "sha512", 00:21:39.615 "dhgroup": "ffdhe6144" 00:21:39.615 } 00:21:39.615 } 00:21:39.615 ]' 00:21:39.615 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.615 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.615 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.615 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:39.615 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.615 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.615 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.615 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.872 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret 
DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:21:39.872 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:21:40.805 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.805 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.805 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.805 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.805 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.805 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.805 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.805 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:41.062 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:21:41.062 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.319 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.319 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:41.319 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:41.319 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.319 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.319 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.319 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.319 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.319 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.319 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.319 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.885 00:21:41.885 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.885 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.885 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.210 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.210 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.210 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.210 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.210 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.210 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.210 { 00:21:42.210 "cntlid": 133, 00:21:42.210 "qid": 0, 00:21:42.210 "state": "enabled", 00:21:42.210 "thread": "nvmf_tgt_poll_group_000", 00:21:42.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:42.210 "listen_address": { 00:21:42.210 "trtype": "TCP", 00:21:42.210 "adrfam": "IPv4", 00:21:42.210 "traddr": "10.0.0.2", 00:21:42.210 "trsvcid": "4420" 00:21:42.210 }, 00:21:42.210 "peer_address": { 00:21:42.210 "trtype": "TCP", 00:21:42.210 "adrfam": "IPv4", 00:21:42.210 "traddr": "10.0.0.1", 00:21:42.210 "trsvcid": "32824" 00:21:42.210 }, 00:21:42.210 "auth": { 00:21:42.210 "state": "completed", 00:21:42.210 "digest": "sha512", 00:21:42.210 "dhgroup": "ffdhe6144" 00:21:42.210 } 
00:21:42.210 } 00:21:42.210 ]' 00:21:42.211 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.211 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.211 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.211 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:42.211 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.211 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.211 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.211 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.512 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:21:42.512 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:21:43.446 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:21:43.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.446 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.446 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.446 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.446 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.446 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.446 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:43.446 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:43.704 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:43.704 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.704 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.704 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:43.704 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:43.704 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.704 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:43.704 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.704 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.704 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.704 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:43.704 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.704 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.270 00:21:44.270 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.270 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.270 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.836 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.836 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:44.836 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.836 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.836 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.836 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.836 { 00:21:44.836 "cntlid": 135, 00:21:44.836 "qid": 0, 00:21:44.836 "state": "enabled", 00:21:44.836 "thread": "nvmf_tgt_poll_group_000", 00:21:44.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:44.836 "listen_address": { 00:21:44.836 "trtype": "TCP", 00:21:44.836 "adrfam": "IPv4", 00:21:44.836 "traddr": "10.0.0.2", 00:21:44.836 "trsvcid": "4420" 00:21:44.836 }, 00:21:44.836 "peer_address": { 00:21:44.836 "trtype": "TCP", 00:21:44.836 "adrfam": "IPv4", 00:21:44.836 "traddr": "10.0.0.1", 00:21:44.836 "trsvcid": "44664" 00:21:44.836 }, 00:21:44.836 "auth": { 00:21:44.836 "state": "completed", 00:21:44.836 "digest": "sha512", 00:21:44.836 "dhgroup": "ffdhe6144" 00:21:44.836 } 00:21:44.836 } 00:21:44.836 ]' 00:21:44.836 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.836 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.836 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.836 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:44.836 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.836 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.836 18:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.836 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.094 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:21:45.095 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:21:46.030 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.030 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.030 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.030 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.030 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.030 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:46.030 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.030 18:28:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:46.030 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:46.287 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:46.287 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.288 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.288 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:46.288 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:46.288 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.288 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.288 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.288 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.288 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.288 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.288 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.288 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.220 00:21:47.220 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.220 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.220 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.478 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.478 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.478 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.478 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.478 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.478 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.478 { 00:21:47.478 "cntlid": 137, 00:21:47.478 "qid": 0, 00:21:47.478 "state": "enabled", 00:21:47.478 "thread": "nvmf_tgt_poll_group_000", 00:21:47.478 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:47.478 "listen_address": { 00:21:47.478 "trtype": "TCP", 00:21:47.478 "adrfam": "IPv4", 00:21:47.478 "traddr": "10.0.0.2", 00:21:47.478 "trsvcid": "4420" 00:21:47.478 }, 00:21:47.478 "peer_address": { 00:21:47.478 "trtype": "TCP", 00:21:47.478 "adrfam": "IPv4", 00:21:47.478 "traddr": "10.0.0.1", 00:21:47.478 "trsvcid": "44694" 00:21:47.478 }, 00:21:47.478 "auth": { 00:21:47.478 "state": "completed", 00:21:47.478 "digest": "sha512", 00:21:47.478 "dhgroup": "ffdhe8192" 00:21:47.478 } 00:21:47.478 } 00:21:47.478 ]' 00:21:47.478 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.478 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.478 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.478 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:47.478 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.478 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.478 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.478 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.043 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret 
DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:21:48.043 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:21:48.975 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.975 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.975 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.975 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.975 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.975 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.975 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.975 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:49.233 18:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:49.233 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.233 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.233 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:49.233 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:49.233 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.233 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.233 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.233 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.233 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.233 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.233 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.233 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.166 00:21:50.166 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.166 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.166 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.424 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.424 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.424 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.424 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.424 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.424 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.424 { 00:21:50.424 "cntlid": 139, 00:21:50.424 "qid": 0, 00:21:50.424 "state": "enabled", 00:21:50.424 "thread": "nvmf_tgt_poll_group_000", 00:21:50.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:50.424 "listen_address": { 00:21:50.424 "trtype": "TCP", 00:21:50.424 "adrfam": "IPv4", 00:21:50.424 "traddr": "10.0.0.2", 00:21:50.424 "trsvcid": "4420" 00:21:50.424 }, 00:21:50.424 "peer_address": { 00:21:50.424 "trtype": "TCP", 00:21:50.424 "adrfam": "IPv4", 00:21:50.424 "traddr": "10.0.0.1", 00:21:50.424 "trsvcid": "44720" 00:21:50.424 }, 00:21:50.424 "auth": { 00:21:50.424 "state": 
"completed", 00:21:50.424 "digest": "sha512", 00:21:50.424 "dhgroup": "ffdhe8192" 00:21:50.424 } 00:21:50.424 } 00:21:50.424 ]' 00:21:50.424 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.424 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.424 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.424 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:50.424 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.681 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.681 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.681 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.939 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:21:50.939 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: --dhchap-ctrl-secret DHHC-1:02:OTY1YzM4NDE0MjQwYzQ0NDk0OGEwMGJiYTkyZWE3ZDBlMTg4OTI5YTI3M2Q0YzVhljRemw==: 00:21:51.872 18:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.872 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.872 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.872 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.872 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.872 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.872 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.872 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:52.130 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:52.130 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.130 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:52.130 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:52.130 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:52.130 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.130 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.130 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.130 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.130 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.130 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.130 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.130 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.062 00:21:53.062 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.062 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.062 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.320 
18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.320 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.320 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.320 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.320 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.320 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.320 { 00:21:53.320 "cntlid": 141, 00:21:53.320 "qid": 0, 00:21:53.320 "state": "enabled", 00:21:53.320 "thread": "nvmf_tgt_poll_group_000", 00:21:53.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:53.320 "listen_address": { 00:21:53.320 "trtype": "TCP", 00:21:53.320 "adrfam": "IPv4", 00:21:53.320 "traddr": "10.0.0.2", 00:21:53.320 "trsvcid": "4420" 00:21:53.320 }, 00:21:53.320 "peer_address": { 00:21:53.320 "trtype": "TCP", 00:21:53.320 "adrfam": "IPv4", 00:21:53.320 "traddr": "10.0.0.1", 00:21:53.320 "trsvcid": "44746" 00:21:53.320 }, 00:21:53.320 "auth": { 00:21:53.320 "state": "completed", 00:21:53.320 "digest": "sha512", 00:21:53.320 "dhgroup": "ffdhe8192" 00:21:53.320 } 00:21:53.320 } 00:21:53.320 ]' 00:21:53.320 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.320 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.320 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.320 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:53.320 18:28:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.320 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.320 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.320 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.578 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:21:53.578 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:01:MjQ4ZDdlNGUxNzQ0ZmNlOTM5YWVmZDA4OGI2YWE5YjUvZAU5: 00:21:54.511 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.511 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.511 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.511 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.768 
18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.768 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.768 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.768 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:55.027 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:55.027 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.027 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.027 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:55.027 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:55.027 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.027 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:55.027 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.027 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.027 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.027 18:28:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:55.027 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.027 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.958 00:21:55.958 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.958 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.958 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.958 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.216 { 00:21:56.216 "cntlid": 143, 
00:21:56.216 "qid": 0, 00:21:56.216 "state": "enabled", 00:21:56.216 "thread": "nvmf_tgt_poll_group_000", 00:21:56.216 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:56.216 "listen_address": { 00:21:56.216 "trtype": "TCP", 00:21:56.216 "adrfam": "IPv4", 00:21:56.216 "traddr": "10.0.0.2", 00:21:56.216 "trsvcid": "4420" 00:21:56.216 }, 00:21:56.216 "peer_address": { 00:21:56.216 "trtype": "TCP", 00:21:56.216 "adrfam": "IPv4", 00:21:56.216 "traddr": "10.0.0.1", 00:21:56.216 "trsvcid": "40712" 00:21:56.216 }, 00:21:56.216 "auth": { 00:21:56.216 "state": "completed", 00:21:56.216 "digest": "sha512", 00:21:56.216 "dhgroup": "ffdhe8192" 00:21:56.216 } 00:21:56.216 } 00:21:56.216 ]' 00:21:56.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:56.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.474 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:21:56.474 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:21:57.407 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.407 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.407 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.407 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.407 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.407 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:57.407 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:57.407 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:57.407 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:57.407 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:21:57.407 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:57.973 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:57.973 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.973 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.973 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:57.973 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:57.973 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.973 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.973 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.973 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.973 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.973 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.973 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.973 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.906 00:21:58.906 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.906 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.906 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.906 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.906 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.906 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.906 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.906 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.906 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.906 { 00:21:58.906 "cntlid": 145, 00:21:58.906 "qid": 0, 00:21:58.906 "state": "enabled", 00:21:58.906 "thread": "nvmf_tgt_poll_group_000", 00:21:58.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:58.906 "listen_address": { 
00:21:58.906 "trtype": "TCP", 00:21:58.906 "adrfam": "IPv4", 00:21:58.906 "traddr": "10.0.0.2", 00:21:58.906 "trsvcid": "4420" 00:21:58.906 }, 00:21:58.906 "peer_address": { 00:21:58.906 "trtype": "TCP", 00:21:58.906 "adrfam": "IPv4", 00:21:58.906 "traddr": "10.0.0.1", 00:21:58.906 "trsvcid": "40724" 00:21:58.906 }, 00:21:58.906 "auth": { 00:21:58.906 "state": "completed", 00:21:58.906 "digest": "sha512", 00:21:58.906 "dhgroup": "ffdhe8192" 00:21:58.906 } 00:21:58.906 } 00:21:58.906 ]' 00:21:58.906 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.164 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.164 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.164 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:59.164 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.164 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.164 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.164 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.422 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:21:59.422 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDQ5ZWViMGVmNzhhMjRmNzVlZTM0ZTE3ZWM5YmJkNGI4OTYzZTZmZjY1MDY0NmIy3wvmsA==: --dhchap-ctrl-secret DHHC-1:03:OTFjMzlkOTU3OGM3OTRmMTkxZjJjNmE4ZGM0ZjBjNjEyZDE0YjAzN2MwNmJkZTAyNWI3ODkwNmI0N2ZkMDEyYbxAJqU=: 00:22:00.355 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.355 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.355 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.355 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.355 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.355 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:00.355 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.355 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.355 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.355 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:00.355 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # local es=0 00:22:00.355 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:00.355 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:00.355 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.355 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:00.355 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.355 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:00.355 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:00.355 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:01.288 request: 00:22:01.288 { 00:22:01.288 "name": "nvme0", 00:22:01.288 "trtype": "tcp", 00:22:01.288 "traddr": "10.0.0.2", 00:22:01.288 "adrfam": "ipv4", 00:22:01.288 "trsvcid": "4420", 00:22:01.288 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:01.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:01.288 "prchk_reftag": false, 00:22:01.288 "prchk_guard": false, 00:22:01.288 "hdgst": false, 00:22:01.288 "ddgst": 
false, 00:22:01.288 "dhchap_key": "key2", 00:22:01.288 "allow_unrecognized_csi": false, 00:22:01.288 "method": "bdev_nvme_attach_controller", 00:22:01.288 "req_id": 1 00:22:01.288 } 00:22:01.288 Got JSON-RPC error response 00:22:01.288 response: 00:22:01.289 { 00:22:01.289 "code": -5, 00:22:01.289 "message": "Input/output error" 00:22:01.289 } 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:01.289 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:02.232 request: 00:22:02.232 { 00:22:02.232 "name": "nvme0", 00:22:02.232 "trtype": "tcp", 00:22:02.232 "traddr": "10.0.0.2", 
00:22:02.232 "adrfam": "ipv4", 00:22:02.232 "trsvcid": "4420", 00:22:02.232 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:02.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:02.232 "prchk_reftag": false, 00:22:02.232 "prchk_guard": false, 00:22:02.232 "hdgst": false, 00:22:02.232 "ddgst": false, 00:22:02.232 "dhchap_key": "key1", 00:22:02.232 "dhchap_ctrlr_key": "ckey2", 00:22:02.232 "allow_unrecognized_csi": false, 00:22:02.232 "method": "bdev_nvme_attach_controller", 00:22:02.232 "req_id": 1 00:22:02.232 } 00:22:02.232 Got JSON-RPC error response 00:22:02.232 response: 00:22:02.232 { 00:22:02.232 "code": -5, 00:22:02.232 "message": "Input/output error" 00:22:02.232 } 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.232 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.165 request: 00:22:03.165 { 00:22:03.165 "name": "nvme0", 00:22:03.165 "trtype": "tcp", 00:22:03.165 "traddr": "10.0.0.2", 00:22:03.165 "adrfam": "ipv4", 00:22:03.165 "trsvcid": "4420", 00:22:03.165 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:03.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:03.165 "prchk_reftag": false, 00:22:03.165 "prchk_guard": false, 00:22:03.165 "hdgst": false, 00:22:03.165 "ddgst": false, 00:22:03.165 "dhchap_key": "key1", 00:22:03.165 "dhchap_ctrlr_key": "ckey1", 00:22:03.165 "allow_unrecognized_csi": false, 00:22:03.165 "method": "bdev_nvme_attach_controller", 00:22:03.165 "req_id": 1 00:22:03.165 } 00:22:03.165 Got JSON-RPC error response 00:22:03.165 response: 00:22:03.165 { 00:22:03.165 "code": -5, 00:22:03.165 "message": "Input/output error" 00:22:03.165 } 00:22:03.165 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:03.165 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:03.165 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:03.165 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:03.165 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.165 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.165 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.165 
18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.165 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2960991 00:22:03.165 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2960991 ']' 00:22:03.165 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2960991 00:22:03.165 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:03.165 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.165 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2960991 00:22:03.165 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:03.165 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:03.165 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2960991' 00:22:03.165 killing process with pid 2960991 00:22:03.165 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2960991 00:22:03.165 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2960991 00:22:04.100 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:04.100 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:04.100 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.100 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:04.100 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2984575 00:22:04.100 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:04.100 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2984575 00:22:04.100 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2984575 ']' 00:22:04.100 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.100 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.100 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:04.100 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.100 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.473 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.473 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:05.473 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:05.473 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:05.473 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.473 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.473 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:05.473 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2984575 00:22:05.473 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2984575 ']' 00:22:05.473 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.473 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.473 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:05.473 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.473 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.473 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.473 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:05.473 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:05.473 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.473 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.039 null0 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.iw9 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.mYf ]] 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mYf 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.039 18:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.28u 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.l84 ]] 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.l84 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.9sb 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.039 18:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Zsk ]] 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Zsk 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.JQl 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.039 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:07.412 nvme0n1 00:22:07.412 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.412 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.412 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:07.670 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.670 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.670 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.670 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.670 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.670 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.670 { 00:22:07.670 "cntlid": 1, 00:22:07.670 "qid": 0, 00:22:07.670 "state": "enabled", 00:22:07.670 "thread": "nvmf_tgt_poll_group_000", 00:22:07.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:07.670 "listen_address": { 00:22:07.670 "trtype": "TCP", 00:22:07.670 "adrfam": "IPv4", 00:22:07.670 "traddr": "10.0.0.2", 00:22:07.670 "trsvcid": "4420" 00:22:07.670 }, 00:22:07.670 "peer_address": { 00:22:07.670 "trtype": "TCP", 00:22:07.670 "adrfam": "IPv4", 00:22:07.670 "traddr": "10.0.0.1", 00:22:07.670 "trsvcid": "36630" 00:22:07.670 }, 00:22:07.670 "auth": { 00:22:07.670 "state": "completed", 00:22:07.670 "digest": "sha512", 00:22:07.670 "dhgroup": "ffdhe8192" 00:22:07.670 } 00:22:07.670 } 00:22:07.670 ]' 00:22:07.670 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.670 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.670 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.928 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:22:07.928 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.928 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.928 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.928 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.185 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:22:08.185 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:22:09.117 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.117 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.117 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.117 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.117 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:09.117 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:09.117 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.117 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.117 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.117 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:09.117 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:09.375 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:09.375 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:09.375 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:09.375 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:09.375 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.375 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:09.375 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.375 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:22:09.375 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.375 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.632 request: 00:22:09.632 { 00:22:09.632 "name": "nvme0", 00:22:09.632 "trtype": "tcp", 00:22:09.632 "traddr": "10.0.0.2", 00:22:09.632 "adrfam": "ipv4", 00:22:09.632 "trsvcid": "4420", 00:22:09.632 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:09.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:09.632 "prchk_reftag": false, 00:22:09.632 "prchk_guard": false, 00:22:09.632 "hdgst": false, 00:22:09.632 "ddgst": false, 00:22:09.632 "dhchap_key": "key3", 00:22:09.632 "allow_unrecognized_csi": false, 00:22:09.632 "method": "bdev_nvme_attach_controller", 00:22:09.632 "req_id": 1 00:22:09.632 } 00:22:09.632 Got JSON-RPC error response 00:22:09.632 response: 00:22:09.632 { 00:22:09.632 "code": -5, 00:22:09.632 "message": "Input/output error" 00:22:09.632 } 00:22:09.632 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:09.632 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:09.632 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:09.632 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:09.632 18:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:09.632 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:09.632 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:09.632 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:09.889 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:09.889 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:09.889 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:09.889 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:09.889 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.889 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:09.889 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.889 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:09.889 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:09.889 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.455 request: 00:22:10.455 { 00:22:10.455 "name": "nvme0", 00:22:10.455 "trtype": "tcp", 00:22:10.455 "traddr": "10.0.0.2", 00:22:10.455 "adrfam": "ipv4", 00:22:10.455 "trsvcid": "4420", 00:22:10.455 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:10.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:10.455 "prchk_reftag": false, 00:22:10.455 "prchk_guard": false, 00:22:10.455 "hdgst": false, 00:22:10.455 "ddgst": false, 00:22:10.455 "dhchap_key": "key3", 00:22:10.455 "allow_unrecognized_csi": false, 00:22:10.455 "method": "bdev_nvme_attach_controller", 00:22:10.455 "req_id": 1 00:22:10.455 } 00:22:10.455 Got JSON-RPC error response 00:22:10.455 response: 00:22:10.455 { 00:22:10.455 "code": -5, 00:22:10.455 "message": "Input/output error" 00:22:10.455 } 00:22:10.455 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:10.455 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:10.455 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:10.455 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:10.455 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:10.455 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:10.455 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 
00:22:10.455 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.455 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.455 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.456 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.456 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.456 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.713 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.713 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.713 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.713 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.713 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.714 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key 
key1 00:22:10.714 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:10.714 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:10.714 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:10.714 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:10.714 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:10.714 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:10.714 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:10.714 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:10.714 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:11.278 request: 00:22:11.278 { 00:22:11.278 "name": "nvme0", 00:22:11.278 "trtype": "tcp", 00:22:11.279 "traddr": "10.0.0.2", 00:22:11.279 "adrfam": "ipv4", 00:22:11.279 "trsvcid": "4420", 00:22:11.279 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:11.279 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:11.279 "prchk_reftag": false, 00:22:11.279 "prchk_guard": false, 00:22:11.279 "hdgst": false, 00:22:11.279 "ddgst": false, 00:22:11.279 "dhchap_key": "key0", 00:22:11.279 "dhchap_ctrlr_key": "key1", 00:22:11.279 "allow_unrecognized_csi": false, 00:22:11.279 "method": "bdev_nvme_attach_controller", 00:22:11.279 "req_id": 1 00:22:11.279 } 00:22:11.279 Got JSON-RPC error response 00:22:11.279 response: 00:22:11.279 { 00:22:11.279 "code": -5, 00:22:11.279 "message": "Input/output error" 00:22:11.279 } 00:22:11.279 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:11.279 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:11.279 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:11.279 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:11.279 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:11.279 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:11.279 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:11.535 nvme0n1 00:22:11.535 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 
00:22:11.535 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:11.535 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.793 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.793 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.793 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.051 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:12.051 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.051 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.051 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.051 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:12.051 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:12.051 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:13.988 nvme0n1 00:22:13.988 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:13.988 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:13.988 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.988 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.988 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:13.988 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.988 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.988 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.988 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:13.988 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:13.988 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.271 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.271 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:22:14.271 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: --dhchap-ctrl-secret DHHC-1:03:ZDE3MjRkMjE4MDU4NWZjZDg0YTJkNmE1ZWVhOTc2ZmRlYzdkNDVlYTFkZmM2NDQ2NjFmNDc1ZjZhMjU5MzcwN3fOhng=: 00:22:15.206 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:15.206 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:15.206 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:15.206 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:15.206 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:15.206 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:15.206 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:15.206 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.206 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.464 18:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:22:15.464 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:15.464 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:15.464 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:15.464 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:15.464 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:15.464 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:15.464 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:15.465 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:15.465 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:16.399 request: 00:22:16.399 { 00:22:16.399 "name": "nvme0", 00:22:16.399 "trtype": "tcp", 00:22:16.399 "traddr": "10.0.0.2", 00:22:16.399 "adrfam": "ipv4", 00:22:16.399 "trsvcid": "4420", 00:22:16.399 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:16.399 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:16.399 "prchk_reftag": false, 00:22:16.399 "prchk_guard": false, 00:22:16.399 "hdgst": false, 00:22:16.399 "ddgst": false, 00:22:16.399 "dhchap_key": "key1", 00:22:16.399 "allow_unrecognized_csi": false, 00:22:16.399 "method": "bdev_nvme_attach_controller", 00:22:16.399 "req_id": 1 00:22:16.399 } 00:22:16.399 Got JSON-RPC error response 00:22:16.399 response: 00:22:16.399 { 00:22:16.399 "code": -5, 00:22:16.399 "message": "Input/output error" 00:22:16.399 } 00:22:16.399 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:16.399 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:16.399 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:16.399 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:16.399 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:16.399 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:16.399 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:17.772 nvme0n1 00:22:17.772 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc 
bdev_nvme_get_controllers 00:22:17.772 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:17.772 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.030 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.030 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.030 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.288 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:18.288 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.288 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.288 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.288 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:18.288 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:18.288 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:18.854 nvme0n1 00:22:18.854 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:18.854 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:18.854 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.111 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.111 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.111 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.370 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:19.370 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.370 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.370 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.370 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: '' 2s 00:22:19.370 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:19.370 18:29:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:19.370 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: 00:22:19.370 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:19.370 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:19.370 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:19.370 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: ]] 00:22:19.370 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NGYyY2ZmM2MwZTFjYTU4ZWU4ZmE1ZmYwYjJlMDRlNDRTOsbX: 00:22:19.370 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:19.370 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:19.370 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1250 -- # return 0 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: 2s 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: ]] 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo 
DHHC-1:02:Y2QzOWY1MzU1ZjhlNjIyMTY5MzI5NzhlZGYzMWZiYzE3YTg0YjE4NmZiMzk0YzhjMCGUcQ==: 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:21.268 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:23.797 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:23.797 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:23.797 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:23.797 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:23.797 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:23.797 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:23.797 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:23.797 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.797 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:23.797 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.797 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.797 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.797 18:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:23.797 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:23.798 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:25.176 nvme0n1 00:22:25.176 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:25.176 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.176 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.176 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.176 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:25.176 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:26.109 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:26.109 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:26.109 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.109 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.109 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.109 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.109 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.109 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.109 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:26.109 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:26.367 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:26.367 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:26.367 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.933 18:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.933 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:26.933 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.933 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.933 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.933 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:26.933 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:26.933 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:26.933 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:26.933 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.933 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:26.933 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.933 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:26.933 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:27.498 request: 00:22:27.498 { 00:22:27.498 "name": "nvme0", 00:22:27.498 "dhchap_key": "key1", 00:22:27.498 "dhchap_ctrlr_key": "key3", 00:22:27.498 "method": "bdev_nvme_set_keys", 00:22:27.498 "req_id": 1 00:22:27.498 } 00:22:27.498 Got JSON-RPC error response 00:22:27.498 response: 00:22:27.498 { 00:22:27.498 "code": -13, 00:22:27.498 "message": "Permission denied" 00:22:27.498 } 00:22:27.756 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:27.756 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:27.756 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:27.756 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:27.756 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:27.756 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.756 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:28.014 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:28.014 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:28.948 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:28.948 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:28.948 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.205 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:29.205 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:30.140 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:30.140 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:30.140 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.398 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:30.398 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:30.398 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.398 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.398 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.398 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:30.398 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 
--ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:30.398 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:32.296 nvme0n1 00:22:32.296 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:32.296 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.296 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.296 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.296 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:32.296 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:32.296 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:32.296 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:32.296 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:32.296 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:32.296 
18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:32.296 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:32.296 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:32.861 request: 00:22:32.861 { 00:22:32.861 "name": "nvme0", 00:22:32.861 "dhchap_key": "key2", 00:22:32.861 "dhchap_ctrlr_key": "key0", 00:22:32.861 "method": "bdev_nvme_set_keys", 00:22:32.861 "req_id": 1 00:22:32.861 } 00:22:32.861 Got JSON-RPC error response 00:22:32.861 response: 00:22:32.861 { 00:22:32.861 "code": -13, 00:22:32.861 "message": "Permission denied" 00:22:32.861 } 00:22:32.861 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:32.861 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:32.861 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:32.861 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:32.861 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:32.861 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:32.861 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.426 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:33.426 18:29:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:34.359 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:34.359 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:34.359 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.617 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:34.617 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:34.617 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:34.617 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2961143 00:22:34.617 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2961143 ']' 00:22:34.617 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2961143 00:22:34.617 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:34.617 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:34.617 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2961143 00:22:34.617 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:34.617 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:34.617 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2961143' 00:22:34.617 killing process 
with pid 2961143 00:22:34.617 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2961143 00:22:34.617 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2961143 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:37.147 rmmod nvme_tcp 00:22:37.147 rmmod nvme_fabrics 00:22:37.147 rmmod nvme_keyring 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2984575 ']' 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2984575 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2984575 ']' 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2984575 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 
00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2984575 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2984575' 00:22:37.147 killing process with pid 2984575 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2984575 00:22:37.147 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2984575 00:22:38.082 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:38.082 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:38.082 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:38.082 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:38.082 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:38.082 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:38.082 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:38.082 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:38.082 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:38.082 18:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.082 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.082 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.611 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:40.611 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.iw9 /tmp/spdk.key-sha256.28u /tmp/spdk.key-sha384.9sb /tmp/spdk.key-sha512.JQl /tmp/spdk.key-sha512.mYf /tmp/spdk.key-sha384.l84 /tmp/spdk.key-sha256.Zsk '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:40.611 00:22:40.611 real 3m46.123s 00:22:40.611 user 8m44.127s 00:22:40.611 sys 0m27.534s 00:22:40.611 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:40.611 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.611 ************************************ 00:22:40.611 END TEST nvmf_auth_target 00:22:40.611 ************************************ 00:22:40.611 18:29:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:40.611 18:29:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:40.611 18:29:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:40.611 18:29:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:40.611 18:29:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set 
+x 00:22:40.611 ************************************ 00:22:40.611 START TEST nvmf_bdevio_no_huge 00:22:40.611 ************************************ 00:22:40.611 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:40.611 * Looking for test storage... 00:22:40.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:40.611 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:40.611 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:22:40.611 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@340 -- # ver1_l=2 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:40.612 18:29:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:40.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.612 --rc genhtml_branch_coverage=1 00:22:40.612 --rc genhtml_function_coverage=1 00:22:40.612 --rc genhtml_legend=1 00:22:40.612 --rc geninfo_all_blocks=1 00:22:40.612 --rc geninfo_unexecuted_blocks=1 00:22:40.612 00:22:40.612 ' 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:40.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.612 --rc genhtml_branch_coverage=1 00:22:40.612 --rc genhtml_function_coverage=1 00:22:40.612 --rc genhtml_legend=1 00:22:40.612 --rc geninfo_all_blocks=1 00:22:40.612 --rc geninfo_unexecuted_blocks=1 00:22:40.612 00:22:40.612 ' 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:40.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.612 --rc genhtml_branch_coverage=1 00:22:40.612 --rc genhtml_function_coverage=1 00:22:40.612 --rc genhtml_legend=1 00:22:40.612 --rc geninfo_all_blocks=1 00:22:40.612 --rc geninfo_unexecuted_blocks=1 00:22:40.612 00:22:40.612 ' 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:40.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.612 --rc genhtml_branch_coverage=1 00:22:40.612 --rc genhtml_function_coverage=1 00:22:40.612 --rc 
genhtml_legend=1 00:22:40.612 --rc geninfo_all_blocks=1 00:22:40.612 --rc geninfo_unexecuted_blocks=1 00:22:40.612 00:22:40.612 ' 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.612 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:40.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:40.613 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.516 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:22:42.517 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:42.517 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:42.517 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.517 
18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:42.517 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:42.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:22:42.517 00:22:42.517 --- 10.0.0.2 ping statistics --- 00:22:42.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.517 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:42.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:22:42.517 00:22:42.517 --- 10.0.0.1 ping statistics --- 00:22:42.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.517 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:42.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.518 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:42.518 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:42.518 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:22:42.518 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:42.518 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:42.518 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:42.518 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2990610 00:22:42.518 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:42.518 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2990610 00:22:42.518 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2990610 ']' 00:22:42.518 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.518 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.518 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.518 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.518 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:42.518 [2024-11-18 18:29:40.796828] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:22:42.518 [2024-11-18 18:29:40.797010] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:42.777 [2024-11-18 18:29:40.961904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:42.777 [2024-11-18 18:29:41.095208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.777 [2024-11-18 18:29:41.095298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.777 [2024-11-18 18:29:41.095321] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.777 [2024-11-18 18:29:41.095342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.777 [2024-11-18 18:29:41.095360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:42.777 [2024-11-18 18:29:41.097251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:42.777 [2024-11-18 18:29:41.097314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:42.777 [2024-11-18 18:29:41.097357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:42.777 [2024-11-18 18:29:41.097379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.712 [2024-11-18 18:29:41.810349] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:43.712 18:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.712 Malloc0 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.712 [2024-11-18 18:29:41.900210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.712 18:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.712 { 00:22:43.712 "params": { 00:22:43.712 "name": "Nvme$subsystem", 00:22:43.712 "trtype": "$TEST_TRANSPORT", 00:22:43.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.712 "adrfam": "ipv4", 00:22:43.712 "trsvcid": "$NVMF_PORT", 00:22:43.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.712 "hdgst": ${hdgst:-false}, 00:22:43.712 "ddgst": ${ddgst:-false} 00:22:43.712 }, 00:22:43.712 "method": "bdev_nvme_attach_controller" 00:22:43.712 } 00:22:43.712 EOF 00:22:43.712 )") 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:43.712 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:43.712 "params": { 00:22:43.712 "name": "Nvme1", 00:22:43.712 "trtype": "tcp", 00:22:43.712 "traddr": "10.0.0.2", 00:22:43.712 "adrfam": "ipv4", 00:22:43.712 "trsvcid": "4420", 00:22:43.712 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.712 "hdgst": false, 00:22:43.712 "ddgst": false 00:22:43.712 }, 00:22:43.712 "method": "bdev_nvme_attach_controller" 00:22:43.712 }' 00:22:43.712 [2024-11-18 18:29:41.986251] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:22:43.712 [2024-11-18 18:29:41.986389] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2990767 ] 00:22:43.971 [2024-11-18 18:29:42.143119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:43.971 [2024-11-18 18:29:42.287215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.971 [2024-11-18 18:29:42.287266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.971 [2024-11-18 18:29:42.287257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.904 I/O targets: 00:22:44.904 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:44.904 00:22:44.904 00:22:44.904 CUnit - A unit testing framework for C - Version 2.1-3 00:22:44.904 http://cunit.sourceforge.net/ 00:22:44.904 00:22:44.904 00:22:44.904 Suite: bdevio tests on: Nvme1n1 00:22:44.904 Test: blockdev write read block ...passed 00:22:44.904 Test: blockdev write zeroes read block ...passed 00:22:44.904 Test: blockdev write zeroes read no split ...passed 00:22:44.904 Test: blockdev write zeroes 
read split ...passed 00:22:44.904 Test: blockdev write zeroes read split partial ...passed 00:22:44.904 Test: blockdev reset ...[2024-11-18 18:29:43.066404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:44.904 [2024-11-18 18:29:43.066604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f1100 (9): Bad file descriptor 00:22:44.904 [2024-11-18 18:29:43.084450] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:22:44.904 passed 00:22:44.904 Test: blockdev write read 8 blocks ...passed 00:22:44.904 Test: blockdev write read size > 128k ...passed 00:22:44.904 Test: blockdev write read invalid size ...passed 00:22:44.904 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:44.904 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:44.904 Test: blockdev write read max offset ...passed 00:22:44.904 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:44.904 Test: blockdev writev readv 8 blocks ...passed 00:22:44.904 Test: blockdev writev readv 30 x 1block ...passed 00:22:45.162 Test: blockdev writev readv block ...passed 00:22:45.162 Test: blockdev writev readv size > 128k ...passed 00:22:45.162 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:45.162 Test: blockdev comparev and writev ...[2024-11-18 18:29:43.258937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:45.162 [2024-11-18 18:29:43.259009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.162 [2024-11-18 18:29:43.259049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:45.162 
[2024-11-18 18:29:43.259076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:45.162 [2024-11-18 18:29:43.259564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:45.162 [2024-11-18 18:29:43.259598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:45.162 [2024-11-18 18:29:43.259642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:45.162 [2024-11-18 18:29:43.259668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:45.162 [2024-11-18 18:29:43.260105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:45.162 [2024-11-18 18:29:43.260138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:45.162 [2024-11-18 18:29:43.260177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:45.162 [2024-11-18 18:29:43.260209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:45.162 [2024-11-18 18:29:43.260659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:45.162 [2024-11-18 18:29:43.260692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:45.162 [2024-11-18 18:29:43.260732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:45.162 [2024-11-18 18:29:43.260758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:45.162 passed 00:22:45.162 Test: blockdev nvme passthru rw ...passed 00:22:45.162 Test: blockdev nvme passthru vendor specific ...[2024-11-18 18:29:43.344028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:45.162 [2024-11-18 18:29:43.344090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.162 [2024-11-18 18:29:43.344330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:45.162 [2024-11-18 18:29:43.344363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:45.162 [2024-11-18 18:29:43.344564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:45.162 [2024-11-18 18:29:43.344601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:45.162 [2024-11-18 18:29:43.344808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:45.162 [2024-11-18 18:29:43.344841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:45.162 passed 00:22:45.162 Test: blockdev nvme admin passthru ...passed 00:22:45.162 Test: blockdev copy ...passed 00:22:45.162 00:22:45.162 Run Summary: Type Total Ran Passed Failed Inactive 00:22:45.162 suites 1 1 n/a 0 0 00:22:45.162 tests 23 23 23 0 0 00:22:45.162 asserts 152 152 152 0 n/a 00:22:45.162 00:22:45.162 Elapsed time = 1.054 
seconds 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:46.096 rmmod nvme_tcp 00:22:46.096 rmmod nvme_fabrics 00:22:46.096 rmmod nvme_keyring 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2990610 ']' 00:22:46.096 18:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2990610 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2990610 ']' 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2990610 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2990610 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2990610' 00:22:46.096 killing process with pid 2990610 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2990610 00:22:46.096 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2990610 00:22:46.663 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:46.663 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:46.663 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:46.663 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:46.663 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:46.663 18:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:46.663 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:46.922 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:46.922 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:46.922 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.922 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.922 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.822 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:48.822 00:22:48.822 real 0m8.661s 00:22:48.822 user 0m19.951s 00:22:48.822 sys 0m2.924s 00:22:48.822 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:48.822 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:48.822 ************************************ 00:22:48.822 END TEST nvmf_bdevio_no_huge 00:22:48.822 ************************************ 00:22:48.822 18:29:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:48.822 18:29:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:48.822 18:29:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:48.822 18:29:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:48.822 
************************************ 00:22:48.822 START TEST nvmf_tls 00:22:48.822 ************************************ 00:22:48.822 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:48.822 * Looking for test storage... 00:22:48.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:48.822 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:48.822 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:22:48.822 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:49.080 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:49.080 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:49.080 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:49.080 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:49.080 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:49.080 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:49.080 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:49.080 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:49.080 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:49.080 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:49.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.081 --rc genhtml_branch_coverage=1 00:22:49.081 --rc genhtml_function_coverage=1 00:22:49.081 --rc genhtml_legend=1 00:22:49.081 --rc geninfo_all_blocks=1 00:22:49.081 --rc geninfo_unexecuted_blocks=1 00:22:49.081 00:22:49.081 ' 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:49.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.081 --rc genhtml_branch_coverage=1 00:22:49.081 --rc genhtml_function_coverage=1 00:22:49.081 --rc genhtml_legend=1 00:22:49.081 --rc geninfo_all_blocks=1 00:22:49.081 --rc geninfo_unexecuted_blocks=1 00:22:49.081 00:22:49.081 ' 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:49.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.081 --rc genhtml_branch_coverage=1 00:22:49.081 --rc genhtml_function_coverage=1 00:22:49.081 --rc genhtml_legend=1 00:22:49.081 --rc geninfo_all_blocks=1 00:22:49.081 --rc geninfo_unexecuted_blocks=1 00:22:49.081 00:22:49.081 ' 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:49.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.081 --rc genhtml_branch_coverage=1 00:22:49.081 --rc genhtml_function_coverage=1 00:22:49.081 --rc genhtml_legend=1 00:22:49.081 --rc geninfo_all_blocks=1 00:22:49.081 --rc geninfo_unexecuted_blocks=1 00:22:49.081 00:22:49.081 ' 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.081 
18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:49.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:49.081 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.054 18:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:51.054 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:51.054 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.054 18:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:51.054 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:51.054 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:51.054 18:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:51.054 
18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.054 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:51.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:22:51.315 00:22:51.315 --- 10.0.0.2 ping statistics --- 00:22:51.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.315 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:51.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:22:51.315 00:22:51.315 --- 10.0.0.1 ping statistics --- 00:22:51.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.315 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2992988 00:22:51.315 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:22:51.316 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2992988 00:22:51.316 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2992988 ']' 00:22:51.316 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.316 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.316 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.316 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.316 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.316 [2024-11-18 18:29:49.602740] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:22:51.316 [2024-11-18 18:29:49.602869] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.574 [2024-11-18 18:29:49.758474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.574 [2024-11-18 18:29:49.892170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.574 [2024-11-18 18:29:49.892252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:51.574 [2024-11-18 18:29:49.892277] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.574 [2024-11-18 18:29:49.892302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.574 [2024-11-18 18:29:49.892322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.574 [2024-11-18 18:29:49.893936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.508 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.508 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:52.508 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:52.508 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:52.508 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.508 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.508 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:52.508 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:52.508 true 00:22:52.766 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:52.766 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:53.024 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:53.024 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:53.024 
18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:53.282 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:53.282 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:53.540 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:53.540 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:53.540 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:53.797 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:53.797 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:54.054 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:54.054 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:54.054 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:54.054 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:54.311 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:54.311 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:54.311 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:22:54.569 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:54.569 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:54.827 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:54.827 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:54.827 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:55.392 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:55.392 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:55.392 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:55.392 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:55.392 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:55.392 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:55.392 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:55.392 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:55.392 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:55.392 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:55.392 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:55.649 18:29:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:55.649 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:55.649 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:55.649 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:55.649 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:55.649 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:55.649 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:55.649 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:55.649 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:55.649 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:55.649 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.QvcQ6MksAo 00:22:55.649 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:55.649 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.r2wH3ws8uW 00:22:55.649 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:55.649 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:55.649 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.QvcQ6MksAo 00:22:55.649 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.r2wH3ws8uW 00:22:55.649 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:55.907 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:56.473 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.QvcQ6MksAo 00:22:56.473 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QvcQ6MksAo 00:22:56.473 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:57.038 [2024-11-18 18:29:55.069583] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.038 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:57.038 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:57.296 [2024-11-18 18:29:55.611304] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:57.296 [2024-11-18 18:29:55.611706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.296 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:57.861 malloc0 00:22:57.861 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:58.119 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QvcQ6MksAo 00:22:58.377 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:58.636 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.QvcQ6MksAo 00:23:08.603 Initializing NVMe Controllers 00:23:08.603 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:08.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:08.603 Initialization complete. Launching workers. 
00:23:08.603 ======================================================== 00:23:08.603 Latency(us) 00:23:08.603 Device Information : IOPS MiB/s Average min max 00:23:08.603 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5523.29 21.58 11592.34 2304.38 13864.62 00:23:08.603 ======================================================== 00:23:08.603 Total : 5523.29 21.58 11592.34 2304.38 13864.62 00:23:08.603 00:23:08.861 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QvcQ6MksAo 00:23:08.861 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:08.861 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:08.861 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:08.861 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QvcQ6MksAo 00:23:08.861 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.861 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2995250 00:23:08.861 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.861 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2995250 /var/tmp/bdevperf.sock 00:23:08.861 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:08.861 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2995250 ']' 00:23:08.861 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:23:08.861 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.861 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.861 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.861 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.861 [2024-11-18 18:30:07.089144] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:23:08.861 [2024-11-18 18:30:07.089302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2995250 ] 00:23:09.120 [2024-11-18 18:30:07.241266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.120 [2024-11-18 18:30:07.362985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.052 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.052 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:10.052 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QvcQ6MksAo 00:23:10.052 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:10.311 [2024-11-18 18:30:08.605453] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:10.568 TLSTESTn1 00:23:10.568 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:10.568 Running I/O for 10 seconds... 00:23:12.877 2686.00 IOPS, 10.49 MiB/s [2024-11-18T17:30:12.147Z] 2728.00 IOPS, 10.66 MiB/s [2024-11-18T17:30:13.080Z] 2735.00 IOPS, 10.68 MiB/s [2024-11-18T17:30:14.013Z] 2745.25 IOPS, 10.72 MiB/s [2024-11-18T17:30:14.946Z] 2758.80 IOPS, 10.78 MiB/s [2024-11-18T17:30:15.877Z] 2763.33 IOPS, 10.79 MiB/s [2024-11-18T17:30:17.249Z] 2767.57 IOPS, 10.81 MiB/s [2024-11-18T17:30:18.183Z] 2768.75 IOPS, 10.82 MiB/s [2024-11-18T17:30:19.185Z] 2764.78 IOPS, 10.80 MiB/s [2024-11-18T17:30:19.185Z] 2767.30 IOPS, 10.81 MiB/s 00:23:20.848 Latency(us) 00:23:20.848 [2024-11-18T17:30:19.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.848 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:20.848 Verification LBA range: start 0x0 length 0x2000 00:23:20.848 TLSTESTn1 : 10.04 2770.31 10.82 0.00 0.00 46111.59 11990.66 52428.80 00:23:20.848 [2024-11-18T17:30:19.185Z] =================================================================================================================== 00:23:20.848 [2024-11-18T17:30:19.185Z] Total : 2770.31 10.82 0.00 0.00 46111.59 11990.66 52428.80 00:23:20.848 { 00:23:20.848 "results": [ 00:23:20.848 { 00:23:20.848 "job": "TLSTESTn1", 00:23:20.848 "core_mask": "0x4", 00:23:20.848 "workload": "verify", 00:23:20.848 "status": "finished", 00:23:20.848 "verify_range": { 00:23:20.848 "start": 0, 00:23:20.848 "length": 8192 00:23:20.848 }, 00:23:20.848 "queue_depth": 128, 00:23:20.848 "io_size": 4096, 00:23:20.848 
"runtime": 10.035334, 00:23:20.848 "iops": 2770.3113817636763, 00:23:20.848 "mibps": 10.82152883501436, 00:23:20.848 "io_failed": 0, 00:23:20.848 "io_timeout": 0, 00:23:20.848 "avg_latency_us": 46111.59083251735, 00:23:20.848 "min_latency_us": 11990.660740740741, 00:23:20.848 "max_latency_us": 52428.8 00:23:20.848 } 00:23:20.848 ], 00:23:20.848 "core_count": 1 00:23:20.848 } 00:23:20.848 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:20.848 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2995250 00:23:20.848 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2995250 ']' 00:23:20.848 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2995250 00:23:20.848 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:20.848 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.848 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2995250 00:23:20.848 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:20.848 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:20.848 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2995250' 00:23:20.848 killing process with pid 2995250 00:23:20.848 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2995250 00:23:20.848 Received shutdown signal, test time was about 10.000000 seconds 00:23:20.848 00:23:20.848 Latency(us) 00:23:20.848 [2024-11-18T17:30:19.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.848 [2024-11-18T17:30:19.185Z] 
=================================================================================================================== 00:23:20.848 [2024-11-18T17:30:19.185Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:20.848 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2995250 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.r2wH3ws8uW 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.r2wH3ws8uW 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.r2wH3ws8uW 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.r2wH3ws8uW 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2997214 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2997214 /var/tmp/bdevperf.sock 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2997214 ']' 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:21.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.438 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.696 [2024-11-18 18:30:19.813234] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:23:21.696 [2024-11-18 18:30:19.813370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2997214 ] 00:23:21.696 [2024-11-18 18:30:19.946643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.954 [2024-11-18 18:30:20.078626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.520 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.520 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:22.520 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.r2wH3ws8uW 00:23:22.820 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:23.077 [2024-11-18 18:30:21.387656] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:23.077 [2024-11-18 18:30:21.397996] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:23.077 [2024-11-18 18:30:21.398878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:23.077 [2024-11-18 18:30:21.399854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:23.077 
[2024-11-18 18:30:21.400848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:23.077 [2024-11-18 18:30:21.400897] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:23.077 [2024-11-18 18:30:21.400921] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:23.077 [2024-11-18 18:30:21.400974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:23.077 request: 00:23:23.077 { 00:23:23.077 "name": "TLSTEST", 00:23:23.077 "trtype": "tcp", 00:23:23.077 "traddr": "10.0.0.2", 00:23:23.077 "adrfam": "ipv4", 00:23:23.077 "trsvcid": "4420", 00:23:23.077 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.077 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:23.077 "prchk_reftag": false, 00:23:23.077 "prchk_guard": false, 00:23:23.077 "hdgst": false, 00:23:23.077 "ddgst": false, 00:23:23.077 "psk": "key0", 00:23:23.077 "allow_unrecognized_csi": false, 00:23:23.077 "method": "bdev_nvme_attach_controller", 00:23:23.077 "req_id": 1 00:23:23.077 } 00:23:23.077 Got JSON-RPC error response 00:23:23.077 response: 00:23:23.077 { 00:23:23.077 "code": -5, 00:23:23.077 "message": "Input/output error" 00:23:23.077 } 00:23:23.336 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2997214 00:23:23.336 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2997214 ']' 00:23:23.336 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2997214 00:23:23.336 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:23.336 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.336 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2997214 00:23:23.336 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:23.336 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:23.336 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2997214' 00:23:23.336 killing process with pid 2997214 00:23:23.336 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2997214 00:23:23.336 Received shutdown signal, test time was about 10.000000 seconds 00:23:23.336 00:23:23.336 Latency(us) 00:23:23.336 [2024-11-18T17:30:21.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.336 [2024-11-18T17:30:21.673Z] =================================================================================================================== 00:23:23.336 [2024-11-18T17:30:21.673Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:23.336 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2997214 00:23:24.269 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:24.269 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:24.269 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:24.269 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:24.269 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:24.269 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QvcQ6MksAo 00:23:24.269 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:23:24.269 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QvcQ6MksAo 00:23:24.269 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:24.269 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:24.269 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:24.270 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:24.270 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QvcQ6MksAo 00:23:24.270 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:24.270 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:24.270 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:24.270 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QvcQ6MksAo 00:23:24.270 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:24.270 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2997495 00:23:24.270 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:24.270 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:24.270 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2997495 
/var/tmp/bdevperf.sock 00:23:24.270 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2997495 ']' 00:23:24.270 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.270 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.270 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.270 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.270 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.270 [2024-11-18 18:30:22.329443] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:23:24.270 [2024-11-18 18:30:22.329576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2997495 ] 00:23:24.270 [2024-11-18 18:30:22.462804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.270 [2024-11-18 18:30:22.584672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.203 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.203 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:25.203 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QvcQ6MksAo 00:23:25.460 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:25.719 [2024-11-18 18:30:23.813642] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:25.719 [2024-11-18 18:30:23.823245] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:25.719 [2024-11-18 18:30:23.823289] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:25.719 [2024-11-18 18:30:23.823365] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:25.719 [2024-11-18 18:30:23.823399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:25.719 [2024-11-18 18:30:23.824366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:25.719 [2024-11-18 18:30:23.825367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:25.719 [2024-11-18 18:30:23.825402] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:25.719 [2024-11-18 18:30:23.825443] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:25.719 [2024-11-18 18:30:23.825479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:25.719 request: 00:23:25.719 { 00:23:25.719 "name": "TLSTEST", 00:23:25.719 "trtype": "tcp", 00:23:25.719 "traddr": "10.0.0.2", 00:23:25.719 "adrfam": "ipv4", 00:23:25.719 "trsvcid": "4420", 00:23:25.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.719 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:25.719 "prchk_reftag": false, 00:23:25.719 "prchk_guard": false, 00:23:25.719 "hdgst": false, 00:23:25.719 "ddgst": false, 00:23:25.719 "psk": "key0", 00:23:25.719 "allow_unrecognized_csi": false, 00:23:25.719 "method": "bdev_nvme_attach_controller", 00:23:25.719 "req_id": 1 00:23:25.719 } 00:23:25.719 Got JSON-RPC error response 00:23:25.719 response: 00:23:25.719 { 00:23:25.719 "code": -5, 00:23:25.719 "message": "Input/output error" 00:23:25.719 } 00:23:25.719 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2997495 00:23:25.719 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2997495 ']' 00:23:25.719 
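[annotation] The ERROR lines from tcp.c and posix.c above show the TLS PSK identity string the target computes when looking up a key for an incoming connection: "NVMe0R01 <hostnqn> <subnqn>". A minimal sketch of that lookup key, with the layout taken directly from the log messages (field order and the "NVMe0R01" tag are as printed above; this is illustrative, not SPDK's actual C code):

```python
# Reconstruct the TLS PSK identity string that appears in the
# tcp_sock_get_key / posix_sock_psk_find_session_server_cb errors above.
# Layout assumed from the log output: "NVMe0R01 <hostnqn> <subnqn>".
def psk_identity(hostnqn: str, subnqn: str) -> str:
    return f"NVMe0R01 {hostnqn} {subnqn}"

# The NOT test at target/tls.sh@150 connects host2 to cnode1, but the target
# has no PSK provisioned for this host/subsystem pair, so the lookup for this
# identity fails and the connection is torn down (errno 107, ENOTCONN).
identity = psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1")
print(identity)
```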
18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2997495 00:23:25.719 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:25.719 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:25.719 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2997495 00:23:25.719 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:25.719 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:25.719 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2997495' 00:23:25.719 killing process with pid 2997495 00:23:25.719 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2997495 00:23:25.719 Received shutdown signal, test time was about 10.000000 seconds 00:23:25.719 00:23:25.719 Latency(us) 00:23:25.719 [2024-11-18T17:30:24.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.719 [2024-11-18T17:30:24.056Z] =================================================================================================================== 00:23:25.719 [2024-11-18T17:30:24.056Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:25.719 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2997495 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:26.653 
18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QvcQ6MksAo 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QvcQ6MksAo 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QvcQ6MksAo 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QvcQ6MksAo 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2997774 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2997774 /var/tmp/bdevperf.sock 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2997774 ']' 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.653 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.653 [2024-11-18 18:30:24.784312] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:23:26.654 [2024-11-18 18:30:24.784449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2997774 ] 00:23:26.654 [2024-11-18 18:30:24.922556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.912 [2024-11-18 18:30:25.043087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.477 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.477 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:27.477 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QvcQ6MksAo 00:23:27.735 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:27.993 [2024-11-18 18:30:26.292933] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.993 [2024-11-18 18:30:26.307136] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:27.993 [2024-11-18 18:30:26.307173] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:27.993 [2024-11-18 18:30:26.307240] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:27.993 [2024-11-18 18:30:26.307650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:27.993 [2024-11-18 18:30:26.308625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:27.993 [2024-11-18 18:30:26.309617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:27.993 [2024-11-18 18:30:26.309672] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:27.993 [2024-11-18 18:30:26.309697] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:27.993 [2024-11-18 18:30:26.309727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:23:27.993 request: 00:23:27.993 { 00:23:27.993 "name": "TLSTEST", 00:23:27.993 "trtype": "tcp", 00:23:27.993 "traddr": "10.0.0.2", 00:23:27.993 "adrfam": "ipv4", 00:23:27.993 "trsvcid": "4420", 00:23:27.993 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:27.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.993 "prchk_reftag": false, 00:23:27.993 "prchk_guard": false, 00:23:27.993 "hdgst": false, 00:23:27.993 "ddgst": false, 00:23:27.993 "psk": "key0", 00:23:27.993 "allow_unrecognized_csi": false, 00:23:27.993 "method": "bdev_nvme_attach_controller", 00:23:27.993 "req_id": 1 00:23:27.993 } 00:23:27.993 Got JSON-RPC error response 00:23:27.993 response: 00:23:27.993 { 00:23:27.993 "code": -5, 00:23:27.993 "message": "Input/output error" 00:23:27.993 } 00:23:27.993 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2997774 00:23:27.993 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2997774 ']' 00:23:27.993 
18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2997774 00:23:27.993 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:28.251 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.251 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2997774 00:23:28.251 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:28.251 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:28.251 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2997774' 00:23:28.251 killing process with pid 2997774 00:23:28.251 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2997774 00:23:28.251 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.251 00:23:28.251 Latency(us) 00:23:28.251 [2024-11-18T17:30:26.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.251 [2024-11-18T17:30:26.588Z] =================================================================================================================== 00:23:28.251 [2024-11-18T17:30:26.588Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:28.251 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2997774 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:28.818 
18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2998163 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:28.818 18:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2998163 /var/tmp/bdevperf.sock 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2998163 ']' 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.818 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.077 [2024-11-18 18:30:27.220501] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:23:29.077 [2024-11-18 18:30:27.220649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998163 ] 00:23:29.077 [2024-11-18 18:30:27.354027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.335 [2024-11-18 18:30:27.478126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.900 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.900 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:29.900 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:30.159 [2024-11-18 18:30:28.439543] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:30.159 [2024-11-18 18:30:28.439627] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:30.159 request: 00:23:30.159 { 00:23:30.159 "name": "key0", 00:23:30.159 "path": "", 00:23:30.159 "method": "keyring_file_add_key", 00:23:30.159 "req_id": 1 00:23:30.159 } 00:23:30.159 Got JSON-RPC error response 00:23:30.159 response: 00:23:30.159 { 00:23:30.159 "code": -1, 00:23:30.159 "message": "Operation not permitted" 00:23:30.159 } 00:23:30.159 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:30.726 [2024-11-18 18:30:28.764519] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:23:30.726 [2024-11-18 18:30:28.764597] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:30.726 request: 00:23:30.726 { 00:23:30.726 "name": "TLSTEST", 00:23:30.726 "trtype": "tcp", 00:23:30.726 "traddr": "10.0.0.2", 00:23:30.726 "adrfam": "ipv4", 00:23:30.726 "trsvcid": "4420", 00:23:30.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:30.726 "prchk_reftag": false, 00:23:30.726 "prchk_guard": false, 00:23:30.726 "hdgst": false, 00:23:30.726 "ddgst": false, 00:23:30.726 "psk": "key0", 00:23:30.726 "allow_unrecognized_csi": false, 00:23:30.726 "method": "bdev_nvme_attach_controller", 00:23:30.726 "req_id": 1 00:23:30.726 } 00:23:30.726 Got JSON-RPC error response 00:23:30.726 response: 00:23:30.726 { 00:23:30.726 "code": -126, 00:23:30.726 "message": "Required key not available" 00:23:30.726 } 00:23:30.726 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2998163 00:23:30.726 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2998163 ']' 00:23:30.726 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2998163 00:23:30.726 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:30.726 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.726 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2998163 00:23:30.726 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:30.726 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:30.726 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2998163' 00:23:30.726 killing process with pid 2998163 
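[annotation] The three negative tests in this stretch fail with different JSON-RPC codes, which are just negated Linux errno values: -5 (EIO, "Input/output error") when the TLS handshake fails against a mismatched PSK, -1 (EPERM, "Operation not permitted") when keyring_file_add_key rejects the empty, non-absolute path, and -126 (ENOKEY, "Required key not available") when bdev_nvme_attach_controller references a key that was never added. The errno 107 in the flush errors is ENOTCONN. A quick cross-check (assuming Linux errno numbering, which is what these test targets run):

```python
import errno
import os

# The JSON-RPC "code" fields in the error responses above are negated
# Linux errno values; map each one back to its symbolic name.
checks = {
    -5:   errno.EIO,    # "Input/output error"       - TLS handshake failed
    -1:   errno.EPERM,  # "Operation not permitted"  - bad keyring path
    -126: errno.ENOKEY, # "Required key not available" - PSK never loaded
}
for rpc_code, e in checks.items():
    assert -rpc_code == e
    print(rpc_code, os.strerror(e))

# errno 107 from nvme_tcp_qpair_process_completions is ENOTCONN.
print(107, os.strerror(errno.ENOTCONN))
```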
00:23:30.726 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2998163 00:23:30.726 Received shutdown signal, test time was about 10.000000 seconds 00:23:30.726 00:23:30.726 Latency(us) 00:23:30.726 [2024-11-18T17:30:29.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.726 [2024-11-18T17:30:29.063Z] =================================================================================================================== 00:23:30.726 [2024-11-18T17:30:29.063Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:30.726 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2998163 00:23:31.660 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:31.660 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:31.660 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:31.660 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:31.660 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:31.660 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2992988 00:23:31.660 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2992988 ']' 00:23:31.660 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2992988 00:23:31.660 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:31.660 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.660 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2992988 00:23:31.660 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:23:31.660 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:31.661 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2992988' 00:23:31.661 killing process with pid 2992988 00:23:31.661 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2992988 00:23:31.661 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2992988 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.0YiorwwFtI 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:33.034 18:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.0YiorwwFtI 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2998584 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2998584 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2998584 ']' 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.034 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.034 [2024-11-18 18:30:31.081691] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:23:33.034 [2024-11-18 18:30:31.081825] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.034 [2024-11-18 18:30:31.234500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.292 [2024-11-18 18:30:31.373697] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.292 [2024-11-18 18:30:31.373774] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.292 [2024-11-18 18:30:31.373799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.292 [2024-11-18 18:30:31.373825] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.292 [2024-11-18 18:30:31.373844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:33.292 [2024-11-18 18:30:31.375457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.858 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.858 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:33.858 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:33.858 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.858 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.858 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.858 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.0YiorwwFtI 00:23:33.858 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0YiorwwFtI 00:23:33.858 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:34.116 [2024-11-18 18:30:32.410465] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.116 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:34.373 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:34.631 [2024-11-18 18:30:32.955749] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:34.631 [2024-11-18 18:30:32.956099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:34.888 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:35.146 malloc0 00:23:35.146 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:35.404 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0YiorwwFtI 00:23:35.662 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:35.919 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0YiorwwFtI 00:23:35.919 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:35.919 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:35.919 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:35.919 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0YiorwwFtI 00:23:35.919 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:35.919 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2999002 00:23:35.919 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:35.920 18:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:35.920 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2999002 /var/tmp/bdevperf.sock 00:23:35.920 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2999002 ']' 00:23:35.920 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.920 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.920 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.920 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.920 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.920 [2024-11-18 18:30:34.146751] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
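The target-side TLS setup traced above (tls.sh@52 through tls.sh@59) is a fixed sequence of rpc.py calls. A dry-run sketch of that sequence with the rpc.py path parameterized — by default it only echoes the calls; point `RPC` at a real `scripts/rpc.py` against a running `nvmf_tgt` to apply them (subsystem NQNs and addresses are taken verbatim from this trace):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the setup_nvmf_tgt sequence this log drives via rpc.py.
# RPC defaults to a harmless echo; set RPC=/path/to/spdk/scripts/rpc.py to
# actually configure a running nvmf_tgt.
RPC=${RPC:-echo rpc.py}

setup_tls_target() {
    local key=$1
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 "$key"
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
}

setup_tls_target /tmp/psk.key
```

The `-k` on the listener is what turns on the (experimental) TLS secure channel, matching the "TLS support is considered experimental" notices in the log.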
00:23:35.920 [2024-11-18 18:30:34.146894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2999002 ] 00:23:36.178 [2024-11-18 18:30:34.278000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.178 [2024-11-18 18:30:34.395869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.110 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.110 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:37.110 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0YiorwwFtI 00:23:37.110 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:37.368 [2024-11-18 18:30:35.625695] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:37.625 TLSTESTn1 00:23:37.625 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:37.625 Running I/O for 10 seconds... 
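The bdevperf table that follows reports both IOPS and MiB/s; for this 4096-byte verify workload the second column is simply the first scaled by the I/O size. A quick cross-check using the final summary values from this run:

```shell
# MiB/s = IOPS * io_size / 2^20; values from this run's JSON summary.
awk 'BEGIN { printf "%.2f MiB/s\n", 2701.9286183573145 * 4096 / 1048576 }'
# -> 10.55 MiB/s, matching the reported "mibps" field
```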
00:23:39.931 2646.00 IOPS, 10.34 MiB/s [2024-11-18T17:30:39.200Z] 2676.50 IOPS, 10.46 MiB/s [2024-11-18T17:30:40.133Z] 2681.00 IOPS, 10.47 MiB/s [2024-11-18T17:30:41.067Z] 2691.50 IOPS, 10.51 MiB/s [2024-11-18T17:30:42.001Z] 2699.40 IOPS, 10.54 MiB/s [2024-11-18T17:30:42.934Z] 2702.83 IOPS, 10.56 MiB/s [2024-11-18T17:30:43.867Z] 2703.86 IOPS, 10.56 MiB/s [2024-11-18T17:30:45.239Z] 2703.62 IOPS, 10.56 MiB/s [2024-11-18T17:30:46.208Z] 2703.56 IOPS, 10.56 MiB/s [2024-11-18T17:30:46.208Z] 2699.40 IOPS, 10.54 MiB/s 00:23:47.871 Latency(us) 00:23:47.871 [2024-11-18T17:30:46.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.871 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:47.871 Verification LBA range: start 0x0 length 0x2000 00:23:47.871 TLSTESTn1 : 10.04 2701.93 10.55 0.00 0.00 47265.49 8980.86 33399.09 00:23:47.871 [2024-11-18T17:30:46.208Z] =================================================================================================================== 00:23:47.871 [2024-11-18T17:30:46.208Z] Total : 2701.93 10.55 0.00 0.00 47265.49 8980.86 33399.09 00:23:47.871 { 00:23:47.871 "results": [ 00:23:47.871 { 00:23:47.871 "job": "TLSTESTn1", 00:23:47.871 "core_mask": "0x4", 00:23:47.871 "workload": "verify", 00:23:47.871 "status": "finished", 00:23:47.871 "verify_range": { 00:23:47.871 "start": 0, 00:23:47.871 "length": 8192 00:23:47.871 }, 00:23:47.871 "queue_depth": 128, 00:23:47.871 "io_size": 4096, 00:23:47.871 "runtime": 10.038015, 00:23:47.871 "iops": 2701.9286183573145, 00:23:47.871 "mibps": 10.55440866545826, 00:23:47.871 "io_failed": 0, 00:23:47.871 "io_timeout": 0, 00:23:47.871 "avg_latency_us": 47265.489213457986, 00:23:47.871 "min_latency_us": 8980.85925925926, 00:23:47.871 "max_latency_us": 33399.08740740741 00:23:47.871 } 00:23:47.871 ], 00:23:47.871 "core_count": 1 00:23:47.871 } 00:23:47.871 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:23:47.871 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2999002 00:23:47.871 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2999002 ']' 00:23:47.871 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2999002 00:23:47.871 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:47.871 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.871 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2999002 00:23:47.871 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:47.871 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:47.871 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2999002' 00:23:47.871 killing process with pid 2999002 00:23:47.871 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2999002 00:23:47.871 Received shutdown signal, test time was about 10.000000 seconds 00:23:47.871 00:23:47.871 Latency(us) 00:23:47.871 [2024-11-18T17:30:46.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.871 [2024-11-18T17:30:46.208Z] =================================================================================================================== 00:23:47.871 [2024-11-18T17:30:46.208Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:47.871 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2999002 00:23:48.830 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.0YiorwwFtI 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0YiorwwFtI 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0YiorwwFtI 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0YiorwwFtI 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0YiorwwFtI 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3000455 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:48.831 
18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3000455 /var/tmp/bdevperf.sock 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3000455 ']' 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.831 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.831 [2024-11-18 18:30:46.899181] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
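The `NOT run_bdevperf ...` invocation above (tls.sh@172, routed through `valid_exec_arg`) expects the command to fail: the key file was just chmod'd to 0666, so loading it must be rejected. The inversion helper can be sketched as follows (a simplified stand-in modeled on autotest_common.sh's `NOT`):

```shell
#!/usr/bin/env bash
# Sketch of a NOT helper: succeed only when the wrapped command fails,
# so an expected failure doesn't abort a `set -e` test script.
NOT() {
    if "$@"; then
        return 1
    fi
    return 0
}

NOT false && echo "failure was expected"
```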
00:23:48.831 [2024-11-18 18:30:46.899314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3000455 ] 00:23:48.831 [2024-11-18 18:30:47.032570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.831 [2024-11-18 18:30:47.150500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.765 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:49.765 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:49.765 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0YiorwwFtI 00:23:50.023 [2024-11-18 18:30:48.188676] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.0YiorwwFtI': 0100666 00:23:50.023 [2024-11-18 18:30:48.188727] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:50.023 request: 00:23:50.023 { 00:23:50.023 "name": "key0", 00:23:50.023 "path": "/tmp/tmp.0YiorwwFtI", 00:23:50.023 "method": "keyring_file_add_key", 00:23:50.023 "req_id": 1 00:23:50.023 } 00:23:50.023 Got JSON-RPC error response 00:23:50.023 response: 00:23:50.023 { 00:23:50.023 "code": -1, 00:23:50.023 "message": "Operation not permitted" 00:23:50.023 } 00:23:50.023 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:50.281 [2024-11-18 18:30:48.473630] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:50.281 [2024-11-18 18:30:48.473725] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:50.281 request: 00:23:50.281 { 00:23:50.281 "name": "TLSTEST", 00:23:50.281 "trtype": "tcp", 00:23:50.281 "traddr": "10.0.0.2", 00:23:50.281 "adrfam": "ipv4", 00:23:50.281 "trsvcid": "4420", 00:23:50.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.281 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:50.281 "prchk_reftag": false, 00:23:50.281 "prchk_guard": false, 00:23:50.281 "hdgst": false, 00:23:50.281 "ddgst": false, 00:23:50.281 "psk": "key0", 00:23:50.281 "allow_unrecognized_csi": false, 00:23:50.281 "method": "bdev_nvme_attach_controller", 00:23:50.281 "req_id": 1 00:23:50.281 } 00:23:50.281 Got JSON-RPC error response 00:23:50.281 response: 00:23:50.281 { 00:23:50.281 "code": -126, 00:23:50.281 "message": "Required key not available" 00:23:50.281 } 00:23:50.281 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3000455 00:23:50.281 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3000455 ']' 00:23:50.281 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3000455 00:23:50.281 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:50.281 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.281 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3000455 00:23:50.281 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:50.281 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:50.281 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3000455' 00:23:50.281 killing process with pid 3000455 00:23:50.281 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3000455 00:23:50.281 Received shutdown signal, test time was about 10.000000 seconds 00:23:50.281 00:23:50.281 Latency(us) 00:23:50.281 [2024-11-18T17:30:48.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.281 [2024-11-18T17:30:48.618Z] =================================================================================================================== 00:23:50.281 [2024-11-18T17:30:48.618Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:50.281 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3000455 00:23:51.214 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:51.214 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:51.214 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:51.214 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:51.214 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:51.214 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2998584 00:23:51.214 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2998584 ']' 00:23:51.214 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2998584 00:23:51.214 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:51.214 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.214 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2998584 00:23:51.214 
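The root cause of both errors above is the key-file mode: `keyring_file_check_path` rejects `/tmp/tmp.0YiorwwFtI` once it is 0666 ("Invalid permissions for key file ... 0100666"), since a PSK must not be group/other-accessible — hence the chmod 0600/0666 dance in tls.sh. A standalone sketch of that kind of check (the exact mask SPDK enforces is an assumption here; `stat -c` is GNU stat, as on the Linux CI host that produced this trace):

```shell
#!/usr/bin/env bash
# Reject key files whose group/other permission bits are set, mirroring
# the keyring permission error in this log.
check_key_perms() {
    local mode
    mode=$(stat -c '%a' "$1") || return 1
    if [ $(( 0$mode & 077 )) -ne 0 ]; then
        return 1        # e.g. 666: readable by group/other -> reject
    fi
    return 0
}

k=$(mktemp)
chmod 0600 "$k"
check_key_perms "$k" && echo "0600 accepted"
chmod 0666 "$k"
check_key_perms "$k" || echo "0666 rejected"
rm -f "$k"
```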
18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:51.214 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:51.214 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2998584' 00:23:51.214 killing process with pid 2998584 00:23:51.214 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2998584 00:23:51.214 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2998584 00:23:52.589 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:52.589 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.589 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.589 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.589 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3000873 00:23:52.589 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:52.589 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3000873 00:23:52.589 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3000873 ']' 00:23:52.589 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.589 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.589 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:23:52.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.589 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.589 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.589 [2024-11-18 18:30:50.750616] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:23:52.589 [2024-11-18 18:30:50.750768] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.589 [2024-11-18 18:30:50.904334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.847 [2024-11-18 18:30:51.043140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.847 [2024-11-18 18:30:51.043224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.847 [2024-11-18 18:30:51.043249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.847 [2024-11-18 18:30:51.043274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.847 [2024-11-18 18:30:51.043304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
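Each app start in this log is paired with a `trap 'process_shm ... nvmftestfini' SIGINT SIGTERM EXIT`, so the target is torn down however the test exits. The pattern in isolation (with hypothetical echo handlers standing in for the real cleanup functions):

```shell
#!/usr/bin/env bash
# Sketch of the cleanup-trap pattern installed after each nvmfappstart:
# the handler runs on normal exit and on SIGINT/SIGTERM alike.
run_with_cleanup() {
    bash -c 'trap "echo cleanup ran" EXIT; echo work done'
}
run_with_cleanup
```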
00:23:52.847 [2024-11-18 18:30:51.044881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.415 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.415 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:53.415 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.415 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:53.415 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.673 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.673 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.0YiorwwFtI 00:23:53.673 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:53.673 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.0YiorwwFtI 00:23:53.673 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:53.673 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:53.673 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:53.673 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:53.673 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.0YiorwwFtI 00:23:53.673 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0YiorwwFtI 00:23:53.673 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:53.931 [2024-11-18 18:30:52.051987] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.931 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:54.189 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:54.447 [2024-11-18 18:30:52.657729] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:54.447 [2024-11-18 18:30:52.658101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.447 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:54.705 malloc0 00:23:54.705 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:55.271 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0YiorwwFtI 00:23:55.271 [2024-11-18 18:30:53.589918] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.0YiorwwFtI': 0100666 00:23:55.271 [2024-11-18 18:30:53.589992] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:55.271 request: 00:23:55.271 { 00:23:55.271 "name": "key0", 00:23:55.271 "path": "/tmp/tmp.0YiorwwFtI", 00:23:55.271 "method": "keyring_file_add_key", 00:23:55.271 "req_id": 1 
00:23:55.271 } 00:23:55.271 Got JSON-RPC error response 00:23:55.271 response: 00:23:55.271 { 00:23:55.271 "code": -1, 00:23:55.271 "message": "Operation not permitted" 00:23:55.271 } 00:23:55.271 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:55.837 [2024-11-18 18:30:53.910822] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:55.837 [2024-11-18 18:30:53.910919] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:55.837 request: 00:23:55.837 { 00:23:55.837 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.837 "host": "nqn.2016-06.io.spdk:host1", 00:23:55.837 "psk": "key0", 00:23:55.837 "method": "nvmf_subsystem_add_host", 00:23:55.837 "req_id": 1 00:23:55.837 } 00:23:55.837 Got JSON-RPC error response 00:23:55.837 response: 00:23:55.837 { 00:23:55.837 "code": -32603, 00:23:55.837 "message": "Internal error" 00:23:55.837 } 00:23:55.837 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:55.837 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:55.837 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:55.837 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:55.837 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3000873 00:23:55.837 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3000873 ']' 00:23:55.837 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3000873 00:23:55.837 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:55.837 18:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:55.837 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3000873 00:23:55.838 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:55.838 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:55.838 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3000873' 00:23:55.838 killing process with pid 3000873 00:23:55.838 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3000873 00:23:55.838 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3000873 00:23:57.212 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.0YiorwwFtI 00:23:57.212 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:57.212 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:57.212 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:57.212 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.212 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3001435 00:23:57.212 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:57.212 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3001435 00:23:57.212 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3001435 ']' 00:23:57.212 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.212 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.212 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.212 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.212 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.212 [2024-11-18 18:30:55.343990] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:23:57.212 [2024-11-18 18:30:55.344146] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.212 [2024-11-18 18:30:55.503799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.470 [2024-11-18 18:30:55.642328] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.470 [2024-11-18 18:30:55.642412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.470 [2024-11-18 18:30:55.642437] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.470 [2024-11-18 18:30:55.642462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.470 [2024-11-18 18:30:55.642481] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
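For quick triage, the JSON-RPC error codes that appear earlier in this trace are errno-style: -1 and -126 from the bad-permissions experiments correspond to EPERM and ENOKEY on Linux, while -32603 is the JSON-RPC "internal error" code. A small lookup helper (hypothetical, just for reading logs like this one):

```shell
#!/usr/bin/env bash
# Translate the JSON-RPC error codes seen earlier in this trace.
rpc_err() {
    case "$1" in
        -1)     echo "EPERM: Operation not permitted (key file mode check)" ;;
        -126)   echo "ENOKEY: Required key not available (PSK not loaded)" ;;
        -32603) echo "JSON-RPC internal error (nvmf_subsystem_add_host rejected)" ;;
        *)      echo "unknown code: $1" ;;
    esac
}

rpc_err -126
```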
00:23:57.470 [2024-11-18 18:30:55.644090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.036 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:58.036 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:58.036 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:58.036 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:58.036 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.036 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.036 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.0YiorwwFtI 00:23:58.036 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0YiorwwFtI 00:23:58.036 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:58.294 [2024-11-18 18:30:56.591752] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.294 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:58.551 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:58.809 [2024-11-18 18:30:57.125248] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:58.809 [2024-11-18 18:30:57.125621] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:58.809 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:59.375 malloc0 00:23:59.375 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:59.633 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0YiorwwFtI 00:23:59.891 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:00.148 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3001853 00:24:00.148 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:00.148 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:00.148 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3001853 /var/tmp/bdevperf.sock 00:24:00.148 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3001853 ']' 00:24:00.148 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.148 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.148 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:24:00.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:00.148 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.148 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.148 [2024-11-18 18:30:58.434359] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:00.148 [2024-11-18 18:30:58.434489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3001853 ] 00:24:00.406 [2024-11-18 18:30:58.565008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.406 [2024-11-18 18:30:58.683088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.341 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.341 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:01.341 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0YiorwwFtI 00:24:01.341 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:01.598 [2024-11-18 18:30:59.880245] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:01.855 TLSTESTn1 00:24:01.855 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:02.129 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:02.129 "subsystems": [ 00:24:02.129 { 00:24:02.129 "subsystem": "keyring", 00:24:02.129 "config": [ 00:24:02.129 { 00:24:02.129 "method": "keyring_file_add_key", 00:24:02.129 "params": { 00:24:02.129 "name": "key0", 00:24:02.129 "path": "/tmp/tmp.0YiorwwFtI" 00:24:02.129 } 00:24:02.129 } 00:24:02.129 ] 00:24:02.129 }, 00:24:02.129 { 00:24:02.129 "subsystem": "iobuf", 00:24:02.129 "config": [ 00:24:02.129 { 00:24:02.129 "method": "iobuf_set_options", 00:24:02.129 "params": { 00:24:02.129 "small_pool_count": 8192, 00:24:02.129 "large_pool_count": 1024, 00:24:02.129 "small_bufsize": 8192, 00:24:02.129 "large_bufsize": 135168, 00:24:02.129 "enable_numa": false 00:24:02.129 } 00:24:02.129 } 00:24:02.129 ] 00:24:02.129 }, 00:24:02.129 { 00:24:02.129 "subsystem": "sock", 00:24:02.129 "config": [ 00:24:02.129 { 00:24:02.129 "method": "sock_set_default_impl", 00:24:02.129 "params": { 00:24:02.129 "impl_name": "posix" 00:24:02.129 } 00:24:02.129 }, 00:24:02.129 { 00:24:02.129 "method": "sock_impl_set_options", 00:24:02.129 "params": { 00:24:02.129 "impl_name": "ssl", 00:24:02.129 "recv_buf_size": 4096, 00:24:02.129 "send_buf_size": 4096, 00:24:02.129 "enable_recv_pipe": true, 00:24:02.129 "enable_quickack": false, 00:24:02.129 "enable_placement_id": 0, 00:24:02.129 "enable_zerocopy_send_server": true, 00:24:02.129 "enable_zerocopy_send_client": false, 00:24:02.129 "zerocopy_threshold": 0, 00:24:02.129 "tls_version": 0, 00:24:02.129 "enable_ktls": false 00:24:02.129 } 00:24:02.129 }, 00:24:02.129 { 00:24:02.129 "method": "sock_impl_set_options", 00:24:02.129 "params": { 00:24:02.129 "impl_name": "posix", 00:24:02.129 "recv_buf_size": 2097152, 00:24:02.129 "send_buf_size": 2097152, 00:24:02.129 "enable_recv_pipe": true, 00:24:02.129 "enable_quickack": false, 00:24:02.129 "enable_placement_id": 0, 
00:24:02.129 "enable_zerocopy_send_server": true, 00:24:02.129 "enable_zerocopy_send_client": false, 00:24:02.129 "zerocopy_threshold": 0, 00:24:02.129 "tls_version": 0, 00:24:02.129 "enable_ktls": false 00:24:02.129 } 00:24:02.129 } 00:24:02.129 ] 00:24:02.129 }, 00:24:02.129 { 00:24:02.129 "subsystem": "vmd", 00:24:02.129 "config": [] 00:24:02.129 }, 00:24:02.129 { 00:24:02.129 "subsystem": "accel", 00:24:02.129 "config": [ 00:24:02.129 { 00:24:02.129 "method": "accel_set_options", 00:24:02.129 "params": { 00:24:02.129 "small_cache_size": 128, 00:24:02.129 "large_cache_size": 16, 00:24:02.129 "task_count": 2048, 00:24:02.129 "sequence_count": 2048, 00:24:02.129 "buf_count": 2048 00:24:02.129 } 00:24:02.129 } 00:24:02.129 ] 00:24:02.129 }, 00:24:02.129 { 00:24:02.129 "subsystem": "bdev", 00:24:02.129 "config": [ 00:24:02.129 { 00:24:02.129 "method": "bdev_set_options", 00:24:02.129 "params": { 00:24:02.129 "bdev_io_pool_size": 65535, 00:24:02.129 "bdev_io_cache_size": 256, 00:24:02.129 "bdev_auto_examine": true, 00:24:02.129 "iobuf_small_cache_size": 128, 00:24:02.129 "iobuf_large_cache_size": 16 00:24:02.129 } 00:24:02.129 }, 00:24:02.129 { 00:24:02.129 "method": "bdev_raid_set_options", 00:24:02.129 "params": { 00:24:02.129 "process_window_size_kb": 1024, 00:24:02.129 "process_max_bandwidth_mb_sec": 0 00:24:02.129 } 00:24:02.129 }, 00:24:02.129 { 00:24:02.129 "method": "bdev_iscsi_set_options", 00:24:02.129 "params": { 00:24:02.129 "timeout_sec": 30 00:24:02.129 } 00:24:02.129 }, 00:24:02.129 { 00:24:02.129 "method": "bdev_nvme_set_options", 00:24:02.129 "params": { 00:24:02.129 "action_on_timeout": "none", 00:24:02.129 "timeout_us": 0, 00:24:02.129 "timeout_admin_us": 0, 00:24:02.129 "keep_alive_timeout_ms": 10000, 00:24:02.129 "arbitration_burst": 0, 00:24:02.129 "low_priority_weight": 0, 00:24:02.129 "medium_priority_weight": 0, 00:24:02.129 "high_priority_weight": 0, 00:24:02.129 "nvme_adminq_poll_period_us": 10000, 00:24:02.129 "nvme_ioq_poll_period_us": 0, 
00:24:02.129 "io_queue_requests": 0, 00:24:02.129 "delay_cmd_submit": true, 00:24:02.129 "transport_retry_count": 4, 00:24:02.129 "bdev_retry_count": 3, 00:24:02.129 "transport_ack_timeout": 0, 00:24:02.129 "ctrlr_loss_timeout_sec": 0, 00:24:02.129 "reconnect_delay_sec": 0, 00:24:02.129 "fast_io_fail_timeout_sec": 0, 00:24:02.129 "disable_auto_failback": false, 00:24:02.129 "generate_uuids": false, 00:24:02.129 "transport_tos": 0, 00:24:02.129 "nvme_error_stat": false, 00:24:02.129 "rdma_srq_size": 0, 00:24:02.129 "io_path_stat": false, 00:24:02.129 "allow_accel_sequence": false, 00:24:02.129 "rdma_max_cq_size": 0, 00:24:02.129 "rdma_cm_event_timeout_ms": 0, 00:24:02.129 "dhchap_digests": [ 00:24:02.129 "sha256", 00:24:02.129 "sha384", 00:24:02.129 "sha512" 00:24:02.129 ], 00:24:02.129 "dhchap_dhgroups": [ 00:24:02.129 "null", 00:24:02.129 "ffdhe2048", 00:24:02.129 "ffdhe3072", 00:24:02.129 "ffdhe4096", 00:24:02.129 "ffdhe6144", 00:24:02.129 "ffdhe8192" 00:24:02.129 ] 00:24:02.129 } 00:24:02.129 }, 00:24:02.129 { 00:24:02.129 "method": "bdev_nvme_set_hotplug", 00:24:02.129 "params": { 00:24:02.129 "period_us": 100000, 00:24:02.129 "enable": false 00:24:02.129 } 00:24:02.129 }, 00:24:02.129 { 00:24:02.129 "method": "bdev_malloc_create", 00:24:02.129 "params": { 00:24:02.129 "name": "malloc0", 00:24:02.129 "num_blocks": 8192, 00:24:02.129 "block_size": 4096, 00:24:02.129 "physical_block_size": 4096, 00:24:02.129 "uuid": "bb423b49-3836-441e-8ce7-ef5fe4eaa8f6", 00:24:02.129 "optimal_io_boundary": 0, 00:24:02.129 "md_size": 0, 00:24:02.129 "dif_type": 0, 00:24:02.129 "dif_is_head_of_md": false, 00:24:02.129 "dif_pi_format": 0 00:24:02.129 } 00:24:02.129 }, 00:24:02.129 { 00:24:02.129 "method": "bdev_wait_for_examine" 00:24:02.129 } 00:24:02.129 ] 00:24:02.129 }, 00:24:02.129 { 00:24:02.129 "subsystem": "nbd", 00:24:02.129 "config": [] 00:24:02.129 }, 00:24:02.129 { 00:24:02.129 "subsystem": "scheduler", 00:24:02.129 "config": [ 00:24:02.129 { 00:24:02.129 "method": 
"framework_set_scheduler", 00:24:02.129 "params": { 00:24:02.129 "name": "static" 00:24:02.129 } 00:24:02.129 } 00:24:02.129 ] 00:24:02.129 }, 00:24:02.129 { 00:24:02.129 "subsystem": "nvmf", 00:24:02.129 "config": [ 00:24:02.129 { 00:24:02.129 "method": "nvmf_set_config", 00:24:02.129 "params": { 00:24:02.129 "discovery_filter": "match_any", 00:24:02.129 "admin_cmd_passthru": { 00:24:02.129 "identify_ctrlr": false 00:24:02.129 }, 00:24:02.129 "dhchap_digests": [ 00:24:02.129 "sha256", 00:24:02.129 "sha384", 00:24:02.129 "sha512" 00:24:02.129 ], 00:24:02.129 "dhchap_dhgroups": [ 00:24:02.129 "null", 00:24:02.129 "ffdhe2048", 00:24:02.129 "ffdhe3072", 00:24:02.129 "ffdhe4096", 00:24:02.129 "ffdhe6144", 00:24:02.129 "ffdhe8192" 00:24:02.129 ] 00:24:02.129 } 00:24:02.129 }, 00:24:02.129 { 00:24:02.129 "method": "nvmf_set_max_subsystems", 00:24:02.129 "params": { 00:24:02.129 "max_subsystems": 1024 00:24:02.129 } 00:24:02.129 }, 00:24:02.129 { 00:24:02.129 "method": "nvmf_set_crdt", 00:24:02.130 "params": { 00:24:02.130 "crdt1": 0, 00:24:02.130 "crdt2": 0, 00:24:02.130 "crdt3": 0 00:24:02.130 } 00:24:02.130 }, 00:24:02.130 { 00:24:02.130 "method": "nvmf_create_transport", 00:24:02.130 "params": { 00:24:02.130 "trtype": "TCP", 00:24:02.130 "max_queue_depth": 128, 00:24:02.130 "max_io_qpairs_per_ctrlr": 127, 00:24:02.130 "in_capsule_data_size": 4096, 00:24:02.130 "max_io_size": 131072, 00:24:02.130 "io_unit_size": 131072, 00:24:02.130 "max_aq_depth": 128, 00:24:02.130 "num_shared_buffers": 511, 00:24:02.130 "buf_cache_size": 4294967295, 00:24:02.130 "dif_insert_or_strip": false, 00:24:02.130 "zcopy": false, 00:24:02.130 "c2h_success": false, 00:24:02.130 "sock_priority": 0, 00:24:02.130 "abort_timeout_sec": 1, 00:24:02.130 "ack_timeout": 0, 00:24:02.130 "data_wr_pool_size": 0 00:24:02.130 } 00:24:02.130 }, 00:24:02.130 { 00:24:02.130 "method": "nvmf_create_subsystem", 00:24:02.130 "params": { 00:24:02.130 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.130 
"allow_any_host": false, 00:24:02.130 "serial_number": "SPDK00000000000001", 00:24:02.130 "model_number": "SPDK bdev Controller", 00:24:02.130 "max_namespaces": 10, 00:24:02.130 "min_cntlid": 1, 00:24:02.130 "max_cntlid": 65519, 00:24:02.130 "ana_reporting": false 00:24:02.130 } 00:24:02.130 }, 00:24:02.130 { 00:24:02.130 "method": "nvmf_subsystem_add_host", 00:24:02.130 "params": { 00:24:02.130 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.130 "host": "nqn.2016-06.io.spdk:host1", 00:24:02.130 "psk": "key0" 00:24:02.130 } 00:24:02.130 }, 00:24:02.130 { 00:24:02.130 "method": "nvmf_subsystem_add_ns", 00:24:02.130 "params": { 00:24:02.130 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.130 "namespace": { 00:24:02.130 "nsid": 1, 00:24:02.130 "bdev_name": "malloc0", 00:24:02.130 "nguid": "BB423B493836441E8CE7EF5FE4EAA8F6", 00:24:02.130 "uuid": "bb423b49-3836-441e-8ce7-ef5fe4eaa8f6", 00:24:02.130 "no_auto_visible": false 00:24:02.130 } 00:24:02.130 } 00:24:02.130 }, 00:24:02.130 { 00:24:02.130 "method": "nvmf_subsystem_add_listener", 00:24:02.130 "params": { 00:24:02.130 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.130 "listen_address": { 00:24:02.130 "trtype": "TCP", 00:24:02.130 "adrfam": "IPv4", 00:24:02.130 "traddr": "10.0.0.2", 00:24:02.130 "trsvcid": "4420" 00:24:02.130 }, 00:24:02.130 "secure_channel": true 00:24:02.130 } 00:24:02.130 } 00:24:02.130 ] 00:24:02.130 } 00:24:02.130 ] 00:24:02.130 }' 00:24:02.130 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:02.388 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:02.388 "subsystems": [ 00:24:02.388 { 00:24:02.388 "subsystem": "keyring", 00:24:02.388 "config": [ 00:24:02.388 { 00:24:02.388 "method": "keyring_file_add_key", 00:24:02.388 "params": { 00:24:02.388 "name": "key0", 00:24:02.388 "path": "/tmp/tmp.0YiorwwFtI" 00:24:02.388 } 
00:24:02.388 } 00:24:02.388 ] 00:24:02.388 }, 00:24:02.388 { 00:24:02.388 "subsystem": "iobuf", 00:24:02.388 "config": [ 00:24:02.388 { 00:24:02.388 "method": "iobuf_set_options", 00:24:02.388 "params": { 00:24:02.388 "small_pool_count": 8192, 00:24:02.388 "large_pool_count": 1024, 00:24:02.388 "small_bufsize": 8192, 00:24:02.388 "large_bufsize": 135168, 00:24:02.388 "enable_numa": false 00:24:02.388 } 00:24:02.388 } 00:24:02.388 ] 00:24:02.388 }, 00:24:02.388 { 00:24:02.388 "subsystem": "sock", 00:24:02.388 "config": [ 00:24:02.388 { 00:24:02.388 "method": "sock_set_default_impl", 00:24:02.388 "params": { 00:24:02.388 "impl_name": "posix" 00:24:02.388 } 00:24:02.388 }, 00:24:02.388 { 00:24:02.388 "method": "sock_impl_set_options", 00:24:02.388 "params": { 00:24:02.388 "impl_name": "ssl", 00:24:02.388 "recv_buf_size": 4096, 00:24:02.388 "send_buf_size": 4096, 00:24:02.388 "enable_recv_pipe": true, 00:24:02.388 "enable_quickack": false, 00:24:02.388 "enable_placement_id": 0, 00:24:02.388 "enable_zerocopy_send_server": true, 00:24:02.388 "enable_zerocopy_send_client": false, 00:24:02.388 "zerocopy_threshold": 0, 00:24:02.388 "tls_version": 0, 00:24:02.388 "enable_ktls": false 00:24:02.388 } 00:24:02.388 }, 00:24:02.388 { 00:24:02.388 "method": "sock_impl_set_options", 00:24:02.388 "params": { 00:24:02.388 "impl_name": "posix", 00:24:02.388 "recv_buf_size": 2097152, 00:24:02.388 "send_buf_size": 2097152, 00:24:02.388 "enable_recv_pipe": true, 00:24:02.388 "enable_quickack": false, 00:24:02.388 "enable_placement_id": 0, 00:24:02.388 "enable_zerocopy_send_server": true, 00:24:02.388 "enable_zerocopy_send_client": false, 00:24:02.388 "zerocopy_threshold": 0, 00:24:02.388 "tls_version": 0, 00:24:02.388 "enable_ktls": false 00:24:02.388 } 00:24:02.388 } 00:24:02.388 ] 00:24:02.388 }, 00:24:02.388 { 00:24:02.388 "subsystem": "vmd", 00:24:02.388 "config": [] 00:24:02.388 }, 00:24:02.388 { 00:24:02.388 "subsystem": "accel", 00:24:02.388 "config": [ 00:24:02.388 { 00:24:02.388 
"method": "accel_set_options", 00:24:02.388 "params": { 00:24:02.388 "small_cache_size": 128, 00:24:02.388 "large_cache_size": 16, 00:24:02.388 "task_count": 2048, 00:24:02.388 "sequence_count": 2048, 00:24:02.388 "buf_count": 2048 00:24:02.388 } 00:24:02.388 } 00:24:02.388 ] 00:24:02.388 }, 00:24:02.388 { 00:24:02.388 "subsystem": "bdev", 00:24:02.388 "config": [ 00:24:02.388 { 00:24:02.388 "method": "bdev_set_options", 00:24:02.388 "params": { 00:24:02.388 "bdev_io_pool_size": 65535, 00:24:02.388 "bdev_io_cache_size": 256, 00:24:02.388 "bdev_auto_examine": true, 00:24:02.388 "iobuf_small_cache_size": 128, 00:24:02.388 "iobuf_large_cache_size": 16 00:24:02.388 } 00:24:02.388 }, 00:24:02.388 { 00:24:02.388 "method": "bdev_raid_set_options", 00:24:02.388 "params": { 00:24:02.388 "process_window_size_kb": 1024, 00:24:02.388 "process_max_bandwidth_mb_sec": 0 00:24:02.388 } 00:24:02.388 }, 00:24:02.388 { 00:24:02.388 "method": "bdev_iscsi_set_options", 00:24:02.388 "params": { 00:24:02.388 "timeout_sec": 30 00:24:02.388 } 00:24:02.388 }, 00:24:02.388 { 00:24:02.388 "method": "bdev_nvme_set_options", 00:24:02.388 "params": { 00:24:02.388 "action_on_timeout": "none", 00:24:02.388 "timeout_us": 0, 00:24:02.388 "timeout_admin_us": 0, 00:24:02.388 "keep_alive_timeout_ms": 10000, 00:24:02.388 "arbitration_burst": 0, 00:24:02.388 "low_priority_weight": 0, 00:24:02.388 "medium_priority_weight": 0, 00:24:02.388 "high_priority_weight": 0, 00:24:02.388 "nvme_adminq_poll_period_us": 10000, 00:24:02.388 "nvme_ioq_poll_period_us": 0, 00:24:02.388 "io_queue_requests": 512, 00:24:02.388 "delay_cmd_submit": true, 00:24:02.389 "transport_retry_count": 4, 00:24:02.389 "bdev_retry_count": 3, 00:24:02.389 "transport_ack_timeout": 0, 00:24:02.389 "ctrlr_loss_timeout_sec": 0, 00:24:02.389 "reconnect_delay_sec": 0, 00:24:02.389 "fast_io_fail_timeout_sec": 0, 00:24:02.389 "disable_auto_failback": false, 00:24:02.389 "generate_uuids": false, 00:24:02.389 "transport_tos": 0, 00:24:02.389 
"nvme_error_stat": false, 00:24:02.389 "rdma_srq_size": 0, 00:24:02.389 "io_path_stat": false, 00:24:02.389 "allow_accel_sequence": false, 00:24:02.389 "rdma_max_cq_size": 0, 00:24:02.389 "rdma_cm_event_timeout_ms": 0, 00:24:02.389 "dhchap_digests": [ 00:24:02.389 "sha256", 00:24:02.389 "sha384", 00:24:02.389 "sha512" 00:24:02.389 ], 00:24:02.389 "dhchap_dhgroups": [ 00:24:02.389 "null", 00:24:02.389 "ffdhe2048", 00:24:02.389 "ffdhe3072", 00:24:02.389 "ffdhe4096", 00:24:02.389 "ffdhe6144", 00:24:02.389 "ffdhe8192" 00:24:02.389 ] 00:24:02.389 } 00:24:02.389 }, 00:24:02.389 { 00:24:02.389 "method": "bdev_nvme_attach_controller", 00:24:02.389 "params": { 00:24:02.389 "name": "TLSTEST", 00:24:02.389 "trtype": "TCP", 00:24:02.389 "adrfam": "IPv4", 00:24:02.389 "traddr": "10.0.0.2", 00:24:02.389 "trsvcid": "4420", 00:24:02.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.389 "prchk_reftag": false, 00:24:02.389 "prchk_guard": false, 00:24:02.389 "ctrlr_loss_timeout_sec": 0, 00:24:02.389 "reconnect_delay_sec": 0, 00:24:02.389 "fast_io_fail_timeout_sec": 0, 00:24:02.389 "psk": "key0", 00:24:02.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:02.389 "hdgst": false, 00:24:02.389 "ddgst": false, 00:24:02.389 "multipath": "multipath" 00:24:02.389 } 00:24:02.389 }, 00:24:02.389 { 00:24:02.389 "method": "bdev_nvme_set_hotplug", 00:24:02.389 "params": { 00:24:02.389 "period_us": 100000, 00:24:02.389 "enable": false 00:24:02.389 } 00:24:02.389 }, 00:24:02.389 { 00:24:02.389 "method": "bdev_wait_for_examine" 00:24:02.389 } 00:24:02.389 ] 00:24:02.389 }, 00:24:02.389 { 00:24:02.389 "subsystem": "nbd", 00:24:02.389 "config": [] 00:24:02.389 } 00:24:02.389 ] 00:24:02.389 }' 00:24:02.389 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3001853 00:24:02.389 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3001853 ']' 00:24:02.389 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 3001853 00:24:02.389 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:02.389 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.389 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3001853 00:24:02.647 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:02.647 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:02.647 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3001853' 00:24:02.647 killing process with pid 3001853 00:24:02.647 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3001853 00:24:02.647 Received shutdown signal, test time was about 10.000000 seconds 00:24:02.647 00:24:02.647 Latency(us) 00:24:02.647 [2024-11-18T17:31:00.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.647 [2024-11-18T17:31:00.984Z] =================================================================================================================== 00:24:02.647 [2024-11-18T17:31:00.984Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:02.647 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3001853 00:24:03.581 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3001435 00:24:03.581 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3001435 ']' 00:24:03.581 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3001435 00:24:03.581 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:03.581 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.581 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3001435 00:24:03.581 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:03.581 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:03.581 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3001435' 00:24:03.581 killing process with pid 3001435 00:24:03.581 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3001435 00:24:03.581 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3001435 00:24:04.958 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:04.958 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:04.958 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:04.958 "subsystems": [ 00:24:04.958 { 00:24:04.958 "subsystem": "keyring", 00:24:04.958 "config": [ 00:24:04.958 { 00:24:04.958 "method": "keyring_file_add_key", 00:24:04.958 "params": { 00:24:04.958 "name": "key0", 00:24:04.958 "path": "/tmp/tmp.0YiorwwFtI" 00:24:04.958 } 00:24:04.958 } 00:24:04.958 ] 00:24:04.958 }, 00:24:04.958 { 00:24:04.958 "subsystem": "iobuf", 00:24:04.958 "config": [ 00:24:04.958 { 00:24:04.958 "method": "iobuf_set_options", 00:24:04.958 "params": { 00:24:04.958 "small_pool_count": 8192, 00:24:04.958 "large_pool_count": 1024, 00:24:04.958 "small_bufsize": 8192, 00:24:04.958 "large_bufsize": 135168, 00:24:04.958 "enable_numa": false 00:24:04.958 } 00:24:04.958 } 00:24:04.958 ] 00:24:04.958 }, 00:24:04.958 { 00:24:04.958 "subsystem": "sock", 00:24:04.958 "config": [ 00:24:04.958 { 00:24:04.958 "method": 
"sock_set_default_impl", 00:24:04.958 "params": { 00:24:04.958 "impl_name": "posix" 00:24:04.958 } 00:24:04.958 }, 00:24:04.958 { 00:24:04.958 "method": "sock_impl_set_options", 00:24:04.958 "params": { 00:24:04.958 "impl_name": "ssl", 00:24:04.958 "recv_buf_size": 4096, 00:24:04.958 "send_buf_size": 4096, 00:24:04.958 "enable_recv_pipe": true, 00:24:04.958 "enable_quickack": false, 00:24:04.958 "enable_placement_id": 0, 00:24:04.958 "enable_zerocopy_send_server": true, 00:24:04.958 "enable_zerocopy_send_client": false, 00:24:04.958 "zerocopy_threshold": 0, 00:24:04.958 "tls_version": 0, 00:24:04.958 "enable_ktls": false 00:24:04.958 } 00:24:04.958 }, 00:24:04.958 { 00:24:04.958 "method": "sock_impl_set_options", 00:24:04.958 "params": { 00:24:04.958 "impl_name": "posix", 00:24:04.958 "recv_buf_size": 2097152, 00:24:04.958 "send_buf_size": 2097152, 00:24:04.958 "enable_recv_pipe": true, 00:24:04.958 "enable_quickack": false, 00:24:04.958 "enable_placement_id": 0, 00:24:04.958 "enable_zerocopy_send_server": true, 00:24:04.958 "enable_zerocopy_send_client": false, 00:24:04.958 "zerocopy_threshold": 0, 00:24:04.958 "tls_version": 0, 00:24:04.958 "enable_ktls": false 00:24:04.958 } 00:24:04.958 } 00:24:04.958 ] 00:24:04.958 }, 00:24:04.958 { 00:24:04.958 "subsystem": "vmd", 00:24:04.958 "config": [] 00:24:04.958 }, 00:24:04.958 { 00:24:04.958 "subsystem": "accel", 00:24:04.958 "config": [ 00:24:04.958 { 00:24:04.958 "method": "accel_set_options", 00:24:04.958 "params": { 00:24:04.958 "small_cache_size": 128, 00:24:04.958 "large_cache_size": 16, 00:24:04.958 "task_count": 2048, 00:24:04.958 "sequence_count": 2048, 00:24:04.958 "buf_count": 2048 00:24:04.958 } 00:24:04.958 } 00:24:04.958 ] 00:24:04.958 }, 00:24:04.958 { 00:24:04.958 "subsystem": "bdev", 00:24:04.958 "config": [ 00:24:04.958 { 00:24:04.958 "method": "bdev_set_options", 00:24:04.958 "params": { 00:24:04.958 "bdev_io_pool_size": 65535, 00:24:04.959 "bdev_io_cache_size": 256, 00:24:04.959 
"bdev_auto_examine": true, 00:24:04.959 "iobuf_small_cache_size": 128, 00:24:04.959 "iobuf_large_cache_size": 16 00:24:04.959 } 00:24:04.959 }, 00:24:04.959 { 00:24:04.959 "method": "bdev_raid_set_options", 00:24:04.959 "params": { 00:24:04.959 "process_window_size_kb": 1024, 00:24:04.959 "process_max_bandwidth_mb_sec": 0 00:24:04.959 } 00:24:04.959 }, 00:24:04.959 { 00:24:04.959 "method": "bdev_iscsi_set_options", 00:24:04.959 "params": { 00:24:04.959 "timeout_sec": 30 00:24:04.959 } 00:24:04.959 }, 00:24:04.959 { 00:24:04.959 "method": "bdev_nvme_set_options", 00:24:04.959 "params": { 00:24:04.959 "action_on_timeout": "none", 00:24:04.959 "timeout_us": 0, 00:24:04.959 "timeout_admin_us": 0, 00:24:04.959 "keep_alive_timeout_ms": 10000, 00:24:04.959 "arbitration_burst": 0, 00:24:04.959 "low_priority_weight": 0, 00:24:04.959 "medium_priority_weight": 0, 00:24:04.959 "high_priority_weight": 0, 00:24:04.959 "nvme_adminq_poll_period_us": 10000, 00:24:04.959 "nvme_ioq_poll_period_us": 0, 00:24:04.959 "io_queue_requests": 0, 00:24:04.959 "delay_cmd_submit": true, 00:24:04.959 "transport_retry_count": 4, 00:24:04.959 "bdev_retry_count": 3, 00:24:04.959 "transport_ack_timeout": 0, 00:24:04.959 "ctrlr_loss_timeout_sec": 0, 00:24:04.959 "reconnect_delay_sec": 0, 00:24:04.959 "fast_io_fail_timeout_sec": 0, 00:24:04.959 "disable_auto_failback": false, 00:24:04.959 "generate_uuids": false, 00:24:04.959 "transport_tos": 0, 00:24:04.959 "nvme_error_stat": false, 00:24:04.959 "rdma_srq_size": 0, 00:24:04.959 "io_path_stat": false, 00:24:04.959 "allow_accel_sequence": false, 00:24:04.959 "rdma_max_cq_size": 0, 00:24:04.959 "rdma_cm_event_timeout_ms": 0, 00:24:04.959 "dhchap_digests": [ 00:24:04.959 "sha256", 00:24:04.959 "sha384", 00:24:04.959 "sha512" 00:24:04.959 ], 00:24:04.959 "dhchap_dhgroups": [ 00:24:04.959 "null", 00:24:04.959 "ffdhe2048", 00:24:04.959 "ffdhe3072", 00:24:04.959 "ffdhe4096", 00:24:04.959 "ffdhe6144", 00:24:04.959 "ffdhe8192" 00:24:04.959 ] 00:24:04.959 } 
00:24:04.959 }, 00:24:04.959 { 00:24:04.959 "method": "bdev_nvme_set_hotplug", 00:24:04.959 "params": { 00:24:04.959 "period_us": 100000, 00:24:04.959 "enable": false 00:24:04.959 } 00:24:04.959 }, 00:24:04.959 { 00:24:04.959 "method": "bdev_malloc_create", 00:24:04.959 "params": { 00:24:04.959 "name": "malloc0", 00:24:04.959 "num_blocks": 8192, 00:24:04.959 "block_size": 4096, 00:24:04.959 "physical_block_size": 4096, 00:24:04.959 "uuid": "bb423b49-3836-441e-8ce7-ef5fe4eaa8f6", 00:24:04.959 "optimal_io_boundary": 0, 00:24:04.959 "md_size": 0, 00:24:04.959 "dif_type": 0, 00:24:04.959 "dif_is_head_of_md": false, 00:24:04.959 "dif_pi_format": 0 00:24:04.959 } 00:24:04.959 }, 00:24:04.959 { 00:24:04.959 "method": "bdev_wait_for_examine" 00:24:04.959 } 00:24:04.959 ] 00:24:04.959 }, 00:24:04.959 { 00:24:04.959 "subsystem": "nbd", 00:24:04.959 "config": [] 00:24:04.959 }, 00:24:04.959 { 00:24:04.959 "subsystem": "scheduler", 00:24:04.959 "config": [ 00:24:04.959 { 00:24:04.959 "method": "framework_set_scheduler", 00:24:04.959 "params": { 00:24:04.959 "name": "static" 00:24:04.959 } 00:24:04.959 } 00:24:04.959 ] 00:24:04.959 }, 00:24:04.959 { 00:24:04.959 "subsystem": "nvmf", 00:24:04.959 "config": [ 00:24:04.959 { 00:24:04.959 "method": "nvmf_set_config", 00:24:04.959 "params": { 00:24:04.959 "discovery_filter": "match_any", 00:24:04.959 "admin_cmd_passthru": { 00:24:04.959 "identify_ctrlr": false 00:24:04.959 }, 00:24:04.959 "dhchap_digests": [ 00:24:04.959 "sha256", 00:24:04.959 "sha384", 00:24:04.959 "sha512" 00:24:04.959 ], 00:24:04.959 "dhchap_dhgroups": [ 00:24:04.959 "null", 00:24:04.959 "ffdhe2048", 00:24:04.959 "ffdhe3072", 00:24:04.959 "ffdhe4096", 00:24:04.959 "ffdhe6144", 00:24:04.959 "ffdhe8192" 00:24:04.959 ] 00:24:04.959 } 00:24:04.959 }, 00:24:04.959 { 00:24:04.959 "method": "nvmf_set_max_subsystems", 00:24:04.959 "params": { 00:24:04.959 "max_subsystems": 1024 00:24:04.959 } 00:24:04.959 }, 00:24:04.959 { 00:24:04.959 "method": "nvmf_set_crdt", 
00:24:04.959 "params": { 00:24:04.959 "crdt1": 0, 00:24:04.959 "crdt2": 0, 00:24:04.959 "crdt3": 0 00:24:04.959 } 00:24:04.959 }, 00:24:04.959 { 00:24:04.959 "method": "nvmf_create_transport", 00:24:04.959 "params": { 00:24:04.959 "trtype": "TCP", 00:24:04.959 "max_queue_depth": 128, 00:24:04.959 "max_io_qpairs_per_ctrlr": 127, 00:24:04.959 "in_capsule_data_size": 4096, 00:24:04.959 "max_io_size": 131072, 00:24:04.959 "io_unit_size": 131072, 00:24:04.959 "max_aq_depth": 128, 00:24:04.959 "num_shared_buffers": 511, 00:24:04.959 "buf_cache_size": 4294967295, 00:24:04.959 "dif_insert_or_strip": false, 00:24:04.959 "zcopy": false, 00:24:04.959 "c2h_success": false, 00:24:04.959 "sock_priority": 0, 00:24:04.959 "abort_timeout_sec": 1, 00:24:04.959 "ack_timeout": 0, 00:24:04.959 "data_wr_pool_size": 0 00:24:04.959 } 00:24:04.959 }, 00:24:04.959 { 00:24:04.959 "method": "nvmf_create_subsystem", 00:24:04.959 "params": { 00:24:04.959 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.959 "allow_any_host": false, 00:24:04.959 "serial_number": "SPDK00000000000001", 00:24:04.959 "model_number": "SPDK bdev Controller", 00:24:04.959 "max_namespaces": 10, 00:24:04.959 "min_cntlid": 1, 00:24:04.959 "max_cntlid": 65519, 00:24:04.959 "ana_reporting": false 00:24:04.959 } 00:24:04.959 }, 00:24:04.959 { 00:24:04.959 "method": "nvmf_subsystem_add_host", 00:24:04.959 "params": { 00:24:04.959 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.959 "host": "nqn.2016-06.io.spdk:host1", 00:24:04.959 "psk": "key0" 00:24:04.959 } 00:24:04.959 }, 00:24:04.959 { 00:24:04.959 "method": "nvmf_subsystem_add_ns", 00:24:04.959 "params": { 00:24:04.959 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.959 "namespace": { 00:24:04.959 "nsid": 1, 00:24:04.959 "bdev_name": "malloc0", 00:24:04.959 "nguid": "BB423B493836441E8CE7EF5FE4EAA8F6", 00:24:04.959 "uuid": "bb423b49-3836-441e-8ce7-ef5fe4eaa8f6", 00:24:04.959 "no_auto_visible": false 00:24:04.959 } 00:24:04.959 } 00:24:04.959 }, 00:24:04.959 { 00:24:04.959 
"method": "nvmf_subsystem_add_listener", 00:24:04.959 "params": { 00:24:04.959 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.959 "listen_address": { 00:24:04.959 "trtype": "TCP", 00:24:04.959 "adrfam": "IPv4", 00:24:04.959 "traddr": "10.0.0.2", 00:24:04.959 "trsvcid": "4420" 00:24:04.959 }, 00:24:04.959 "secure_channel": true 00:24:04.959 } 00:24:04.959 } 00:24:04.959 ] 00:24:04.959 } 00:24:04.959 ] 00:24:04.959 }' 00:24:04.959 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:04.959 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.959 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3002399 00:24:04.959 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:04.959 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3002399 00:24:04.959 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3002399 ']' 00:24:04.959 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.959 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.959 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:04.959 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.960 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.960 [2024-11-18 18:31:02.961142] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:04.960 [2024-11-18 18:31:02.961272] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.960 [2024-11-18 18:31:03.107433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.960 [2024-11-18 18:31:03.243895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.960 [2024-11-18 18:31:03.243968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.960 [2024-11-18 18:31:03.243994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.960 [2024-11-18 18:31:03.244020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.960 [2024-11-18 18:31:03.244040] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:04.960 [2024-11-18 18:31:03.245763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.526 [2024-11-18 18:31:03.795209] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.526 [2024-11-18 18:31:03.827221] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:05.526 [2024-11-18 18:31:03.827586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.785 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.785 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:05.785 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:05.785 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:05.785 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.785 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.785 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3002548 00:24:05.785 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3002548 /var/tmp/bdevperf.sock 00:24:05.785 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3002548 ']' 00:24:05.785 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:05.785 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:05.785 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:24:05.785 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:05.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:05.785 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:05.785 "subsystems": [ 00:24:05.785 { 00:24:05.785 "subsystem": "keyring", 00:24:05.785 "config": [ 00:24:05.785 { 00:24:05.785 "method": "keyring_file_add_key", 00:24:05.785 "params": { 00:24:05.785 "name": "key0", 00:24:05.785 "path": "/tmp/tmp.0YiorwwFtI" 00:24:05.785 } 00:24:05.785 } 00:24:05.785 ] 00:24:05.785 }, 00:24:05.785 { 00:24:05.785 "subsystem": "iobuf", 00:24:05.785 "config": [ 00:24:05.785 { 00:24:05.785 "method": "iobuf_set_options", 00:24:05.785 "params": { 00:24:05.785 "small_pool_count": 8192, 00:24:05.785 "large_pool_count": 1024, 00:24:05.785 "small_bufsize": 8192, 00:24:05.785 "large_bufsize": 135168, 00:24:05.785 "enable_numa": false 00:24:05.785 } 00:24:05.785 } 00:24:05.785 ] 00:24:05.785 }, 00:24:05.785 { 00:24:05.785 "subsystem": "sock", 00:24:05.785 "config": [ 00:24:05.785 { 00:24:05.785 "method": "sock_set_default_impl", 00:24:05.785 "params": { 00:24:05.785 "impl_name": "posix" 00:24:05.785 } 00:24:05.785 }, 00:24:05.785 { 00:24:05.785 "method": "sock_impl_set_options", 00:24:05.785 "params": { 00:24:05.785 "impl_name": "ssl", 00:24:05.785 "recv_buf_size": 4096, 00:24:05.785 "send_buf_size": 4096, 00:24:05.785 "enable_recv_pipe": true, 00:24:05.785 "enable_quickack": false, 00:24:05.785 "enable_placement_id": 0, 00:24:05.785 "enable_zerocopy_send_server": true, 00:24:05.785 "enable_zerocopy_send_client": false, 00:24:05.785 "zerocopy_threshold": 0, 00:24:05.785 "tls_version": 0, 00:24:05.785 "enable_ktls": false 00:24:05.785 } 00:24:05.785 }, 00:24:05.785 { 00:24:05.785 "method": "sock_impl_set_options", 00:24:05.785 "params": { 
00:24:05.785 "impl_name": "posix", 00:24:05.785 "recv_buf_size": 2097152, 00:24:05.785 "send_buf_size": 2097152, 00:24:05.785 "enable_recv_pipe": true, 00:24:05.785 "enable_quickack": false, 00:24:05.785 "enable_placement_id": 0, 00:24:05.785 "enable_zerocopy_send_server": true, 00:24:05.785 "enable_zerocopy_send_client": false, 00:24:05.785 "zerocopy_threshold": 0, 00:24:05.785 "tls_version": 0, 00:24:05.785 "enable_ktls": false 00:24:05.785 } 00:24:05.785 } 00:24:05.785 ] 00:24:05.785 }, 00:24:05.785 { 00:24:05.785 "subsystem": "vmd", 00:24:05.785 "config": [] 00:24:05.785 }, 00:24:05.785 { 00:24:05.785 "subsystem": "accel", 00:24:05.785 "config": [ 00:24:05.785 { 00:24:05.785 "method": "accel_set_options", 00:24:05.785 "params": { 00:24:05.785 "small_cache_size": 128, 00:24:05.785 "large_cache_size": 16, 00:24:05.785 "task_count": 2048, 00:24:05.785 "sequence_count": 2048, 00:24:05.786 "buf_count": 2048 00:24:05.786 } 00:24:05.786 } 00:24:05.786 ] 00:24:05.786 }, 00:24:05.786 { 00:24:05.786 "subsystem": "bdev", 00:24:05.786 "config": [ 00:24:05.786 { 00:24:05.786 "method": "bdev_set_options", 00:24:05.786 "params": { 00:24:05.786 "bdev_io_pool_size": 65535, 00:24:05.786 "bdev_io_cache_size": 256, 00:24:05.786 "bdev_auto_examine": true, 00:24:05.786 "iobuf_small_cache_size": 128, 00:24:05.786 "iobuf_large_cache_size": 16 00:24:05.786 } 00:24:05.786 }, 00:24:05.786 { 00:24:05.786 "method": "bdev_raid_set_options", 00:24:05.786 "params": { 00:24:05.786 "process_window_size_kb": 1024, 00:24:05.786 "process_max_bandwidth_mb_sec": 0 00:24:05.786 } 00:24:05.786 }, 00:24:05.786 { 00:24:05.786 "method": "bdev_iscsi_set_options", 00:24:05.786 "params": { 00:24:05.786 "timeout_sec": 30 00:24:05.786 } 00:24:05.786 }, 00:24:05.786 { 00:24:05.786 "method": "bdev_nvme_set_options", 00:24:05.786 "params": { 00:24:05.786 "action_on_timeout": "none", 00:24:05.786 "timeout_us": 0, 00:24:05.786 "timeout_admin_us": 0, 00:24:05.786 "keep_alive_timeout_ms": 10000, 00:24:05.786 
"arbitration_burst": 0, 00:24:05.786 "low_priority_weight": 0, 00:24:05.786 "medium_priority_weight": 0, 00:24:05.786 "high_priority_weight": 0, 00:24:05.786 "nvme_adminq_poll_period_us": 10000, 00:24:05.786 "nvme_ioq_poll_period_us": 0, 00:24:05.786 "io_queue_requests": 512, 00:24:05.786 "delay_cmd_submit": true, 00:24:05.786 "transport_retry_count": 4, 00:24:05.786 "bdev_retry_count": 3, 00:24:05.786 "transport_ack_timeout": 0, 00:24:05.786 "ctrlr_loss_timeout_sec": 0, 00:24:05.786 "reconnect_delay_sec": 0, 00:24:05.786 "fast_io_fail_timeout_sec": 0, 00:24:05.786 "disable_auto_failback": false, 00:24:05.786 "generate_uuids": false, 00:24:05.786 "transport_tos": 0, 00:24:05.786 "nvme_error_stat": false, 00:24:05.786 "rdma_srq_size": 0, 00:24:05.786 "io_path_stat": false, 00:24:05.786 "allow_accel_sequence": false, 00:24:05.786 "rdma_max_cq_size": 0, 00:24:05.786 "rdma_cm_event_timeout_ms": 0, 00:24:05.786 "dhchap_digests": [ 00:24:05.786 "sha256", 00:24:05.786 "sha384", 00:24:05.786 "sha512" 00:24:05.786 ], 00:24:05.786 "dhchap_dhgroups": [ 00:24:05.786 "null", 00:24:05.786 "ffdhe2048", 00:24:05.786 "ffdhe3072", 00:24:05.786 "ffdhe4096", 00:24:05.786 "ffdhe6144", 00:24:05.786 "ffdhe8192" 00:24:05.786 ] 00:24:05.786 } 00:24:05.786 }, 00:24:05.786 { 00:24:05.786 "method": "bdev_nvme_attach_controller", 00:24:05.786 "params": { 00:24:05.786 "name": "TLSTEST", 00:24:05.786 "trtype": "TCP", 00:24:05.786 "adrfam": "IPv4", 00:24:05.786 "traddr": "10.0.0.2", 00:24:05.786 "trsvcid": "4420", 00:24:05.786 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.786 "prchk_reftag": false, 00:24:05.786 "prchk_guard": false, 00:24:05.786 "ctrlr_loss_timeout_sec": 0, 00:24:05.786 "reconnect_delay_sec": 0, 00:24:05.786 "fast_io_fail_timeout_sec": 0, 00:24:05.786 "psk": "key0", 00:24:05.786 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:05.786 "hdgst": false, 00:24:05.786 "ddgst": false, 00:24:05.786 "multipath": "multipath" 00:24:05.786 } 00:24:05.786 }, 00:24:05.786 { 00:24:05.786 
"method": "bdev_nvme_set_hotplug", 00:24:05.786 "params": { 00:24:05.786 "period_us": 100000, 00:24:05.786 "enable": false 00:24:05.786 } 00:24:05.786 }, 00:24:05.786 { 00:24:05.786 "method": "bdev_wait_for_examine" 00:24:05.786 } 00:24:05.786 ] 00:24:05.786 }, 00:24:05.786 { 00:24:05.786 "subsystem": "nbd", 00:24:05.786 "config": [] 00:24:05.786 } 00:24:05.786 ] 00:24:05.786 }' 00:24:05.786 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.786 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.786 [2024-11-18 18:31:04.053362] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:05.786 [2024-11-18 18:31:04.053490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3002548 ] 00:24:06.045 [2024-11-18 18:31:04.183946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.045 [2024-11-18 18:31:04.304747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.612 [2024-11-18 18:31:04.713545] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:06.870 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.870 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:06.870 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:06.870 Running I/O for 10 seconds... 
00:24:09.174 2680.00 IOPS, 10.47 MiB/s [2024-11-18T17:31:08.444Z] 2725.00 IOPS, 10.64 MiB/s [2024-11-18T17:31:09.385Z] 2748.67 IOPS, 10.74 MiB/s [2024-11-18T17:31:10.319Z] 2745.50 IOPS, 10.72 MiB/s [2024-11-18T17:31:11.253Z] 2747.60 IOPS, 10.73 MiB/s [2024-11-18T17:31:12.699Z] 2740.83 IOPS, 10.71 MiB/s [2024-11-18T17:31:13.265Z] 2742.71 IOPS, 10.71 MiB/s [2024-11-18T17:31:14.639Z] 2734.88 IOPS, 10.68 MiB/s [2024-11-18T17:31:15.572Z] 2737.11 IOPS, 10.69 MiB/s [2024-11-18T17:31:15.572Z] 2727.70 IOPS, 10.66 MiB/s 00:24:17.235 Latency(us) 00:24:17.235 [2024-11-18T17:31:15.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.235 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:17.235 Verification LBA range: start 0x0 length 0x2000 00:24:17.235 TLSTESTn1 : 10.03 2732.78 10.67 0.00 0.00 46745.26 8252.68 53982.25 00:24:17.235 [2024-11-18T17:31:15.572Z] =================================================================================================================== 00:24:17.235 [2024-11-18T17:31:15.572Z] Total : 2732.78 10.67 0.00 0.00 46745.26 8252.68 53982.25 00:24:17.235 { 00:24:17.235 "results": [ 00:24:17.235 { 00:24:17.235 "job": "TLSTESTn1", 00:24:17.235 "core_mask": "0x4", 00:24:17.235 "workload": "verify", 00:24:17.235 "status": "finished", 00:24:17.235 "verify_range": { 00:24:17.235 "start": 0, 00:24:17.235 "length": 8192 00:24:17.235 }, 00:24:17.235 "queue_depth": 128, 00:24:17.235 "io_size": 4096, 00:24:17.235 "runtime": 10.027885, 00:24:17.235 "iops": 2732.7796439628096, 00:24:17.235 "mibps": 10.674920484229725, 00:24:17.235 "io_failed": 0, 00:24:17.235 "io_timeout": 0, 00:24:17.235 "avg_latency_us": 46745.262259416035, 00:24:17.235 "min_latency_us": 8252.68148148148, 00:24:17.235 "max_latency_us": 53982.24592592593 00:24:17.235 } 00:24:17.235 ], 00:24:17.235 "core_count": 1 00:24:17.235 } 00:24:17.235 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:24:17.235 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3002548 00:24:17.235 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3002548 ']' 00:24:17.235 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3002548 00:24:17.235 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:17.235 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.235 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3002548 00:24:17.235 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:17.235 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:17.235 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3002548' 00:24:17.235 killing process with pid 3002548 00:24:17.235 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3002548 00:24:17.235 Received shutdown signal, test time was about 10.000000 seconds 00:24:17.235 00:24:17.235 Latency(us) 00:24:17.235 [2024-11-18T17:31:15.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.235 [2024-11-18T17:31:15.572Z] =================================================================================================================== 00:24:17.235 [2024-11-18T17:31:15.572Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:17.235 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3002548 00:24:17.800 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3002399 00:24:17.800 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 3002399 ']' 00:24:17.800 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3002399 00:24:17.800 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:17.800 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.800 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3002399 00:24:18.058 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:18.058 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:18.058 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3002399' 00:24:18.058 killing process with pid 3002399 00:24:18.058 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3002399 00:24:18.058 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3002399 00:24:19.431 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:19.431 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:19.431 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:19.431 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.431 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3004137 00:24:19.431 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:19.431 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3004137 00:24:19.431 
18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3004137 ']' 00:24:19.431 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.431 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:19.431 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.431 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:19.431 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.431 [2024-11-18 18:31:17.529513] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:19.431 [2024-11-18 18:31:17.529667] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.431 [2024-11-18 18:31:17.680030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.689 [2024-11-18 18:31:17.815283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.689 [2024-11-18 18:31:17.815367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.689 [2024-11-18 18:31:17.815393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.689 [2024-11-18 18:31:17.815418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:19.689 [2024-11-18 18:31:17.815438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:19.689 [2024-11-18 18:31:17.817041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.255 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.255 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:20.255 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:20.255 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:20.255 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.255 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.255 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.0YiorwwFtI 00:24:20.255 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0YiorwwFtI 00:24:20.255 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:20.512 [2024-11-18 18:31:18.744628] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.512 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:20.769 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:21.028 [2024-11-18 18:31:19.258001] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:24:21.028 [2024-11-18 18:31:19.258350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.028 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:21.285 malloc0 00:24:21.285 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:21.542 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0YiorwwFtI 00:24:21.800 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:22.058 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3004431 00:24:22.058 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:22.058 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:22.058 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3004431 /var/tmp/bdevperf.sock 00:24:22.058 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3004431 ']' 00:24:22.058 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:22.058 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.058 
18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:22.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:22.058 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.058 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.316 [2024-11-18 18:31:20.469773] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:22.316 [2024-11-18 18:31:20.469924] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3004431 ] 00:24:22.316 [2024-11-18 18:31:20.620931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.574 [2024-11-18 18:31:20.760564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.140 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.140 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:23.140 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0YiorwwFtI 00:24:23.398 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:23.656 [2024-11-18 18:31:21.975179] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:24:23.914 nvme0n1 00:24:23.914 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:23.914 Running I/O for 1 seconds... 00:24:25.288 2402.00 IOPS, 9.38 MiB/s 00:24:25.288 Latency(us) 00:24:25.288 [2024-11-18T17:31:23.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.288 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:25.288 Verification LBA range: start 0x0 length 0x2000 00:24:25.288 nvme0n1 : 1.03 2458.76 9.60 0.00 0.00 51404.50 9272.13 42331.40 00:24:25.288 [2024-11-18T17:31:23.625Z] =================================================================================================================== 00:24:25.288 [2024-11-18T17:31:23.625Z] Total : 2458.76 9.60 0.00 0.00 51404.50 9272.13 42331.40 00:24:25.288 { 00:24:25.288 "results": [ 00:24:25.288 { 00:24:25.288 "job": "nvme0n1", 00:24:25.288 "core_mask": "0x2", 00:24:25.288 "workload": "verify", 00:24:25.288 "status": "finished", 00:24:25.288 "verify_range": { 00:24:25.288 "start": 0, 00:24:25.288 "length": 8192 00:24:25.288 }, 00:24:25.288 "queue_depth": 128, 00:24:25.288 "io_size": 4096, 00:24:25.288 "runtime": 1.02938, 00:24:25.288 "iops": 2458.761584643183, 00:24:25.288 "mibps": 9.604537440012434, 00:24:25.288 "io_failed": 0, 00:24:25.288 "io_timeout": 0, 00:24:25.288 "avg_latency_us": 51404.4985106165, 00:24:25.288 "min_latency_us": 9272.13037037037, 00:24:25.288 "max_latency_us": 42331.40148148148 00:24:25.288 } 00:24:25.288 ], 00:24:25.288 "core_count": 1 00:24:25.288 } 00:24:25.288 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3004431 00:24:25.288 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3004431 ']' 00:24:25.288 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 3004431 00:24:25.288 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:25.288 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.288 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3004431 00:24:25.288 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:25.288 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:25.288 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3004431' 00:24:25.288 killing process with pid 3004431 00:24:25.288 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3004431 00:24:25.288 Received shutdown signal, test time was about 1.000000 seconds 00:24:25.288 00:24:25.288 Latency(us) 00:24:25.288 [2024-11-18T17:31:23.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.288 [2024-11-18T17:31:23.625Z] =================================================================================================================== 00:24:25.288 [2024-11-18T17:31:23.625Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:25.288 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3004431 00:24:25.855 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3004137 00:24:25.855 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3004137 ']' 00:24:25.855 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3004137 00:24:25.855 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:25.855 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.855 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3004137 00:24:25.855 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:25.855 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:25.855 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3004137' 00:24:25.855 killing process with pid 3004137 00:24:25.855 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3004137 00:24:25.855 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3004137 00:24:27.229 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:27.229 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:27.229 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:27.229 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.229 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3005100 00:24:27.229 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:27.229 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3005100 00:24:27.229 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3005100 ']' 00:24:27.229 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.229 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:24:27.229 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.229 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.229 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.229 [2024-11-18 18:31:25.539918] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:27.229 [2024-11-18 18:31:25.540057] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.486 [2024-11-18 18:31:25.688072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.744 [2024-11-18 18:31:25.825202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.744 [2024-11-18 18:31:25.825287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.744 [2024-11-18 18:31:25.825313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.744 [2024-11-18 18:31:25.825339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.744 [2024-11-18 18:31:25.825373] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:27.744 [2024-11-18 18:31:25.827033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.310 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.310 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:28.310 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:28.310 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:28.310 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.310 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.310 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:28.310 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.310 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.310 [2024-11-18 18:31:26.552333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.310 malloc0 00:24:28.310 [2024-11-18 18:31:26.615486] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:28.310 [2024-11-18 18:31:26.615924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.310 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.310 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3005255 00:24:28.310 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3005255 /var/tmp/bdevperf.sock 00:24:28.310 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:28.310 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3005255 ']' 00:24:28.310 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:28.310 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.310 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:28.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:28.310 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.310 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.568 [2024-11-18 18:31:26.725401] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:24:28.568 [2024-11-18 18:31:26.725550] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3005255 ] 00:24:28.568 [2024-11-18 18:31:26.867443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.825 [2024-11-18 18:31:27.003787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.391 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.391 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:29.391 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0YiorwwFtI 00:24:29.649 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:30.214 [2024-11-18 18:31:28.267526] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:30.214 nvme0n1 00:24:30.214 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:30.214 Running I/O for 1 seconds... 
00:24:31.586 2431.00 IOPS, 9.50 MiB/s 00:24:31.586 Latency(us) 00:24:31.586 [2024-11-18T17:31:29.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.586 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:31.586 Verification LBA range: start 0x0 length 0x2000 00:24:31.586 nvme0n1 : 1.03 2481.77 9.69 0.00 0.00 50872.90 8543.95 50875.35 00:24:31.586 [2024-11-18T17:31:29.923Z] =================================================================================================================== 00:24:31.586 [2024-11-18T17:31:29.923Z] Total : 2481.77 9.69 0.00 0.00 50872.90 8543.95 50875.35 00:24:31.586 { 00:24:31.586 "results": [ 00:24:31.586 { 00:24:31.586 "job": "nvme0n1", 00:24:31.586 "core_mask": "0x2", 00:24:31.586 "workload": "verify", 00:24:31.586 "status": "finished", 00:24:31.586 "verify_range": { 00:24:31.586 "start": 0, 00:24:31.586 "length": 8192 00:24:31.586 }, 00:24:31.586 "queue_depth": 128, 00:24:31.586 "io_size": 4096, 00:24:31.586 "runtime": 1.03112, 00:24:31.586 "iops": 2481.767398556909, 00:24:31.586 "mibps": 9.694403900612926, 00:24:31.586 "io_failed": 0, 00:24:31.586 "io_timeout": 0, 00:24:31.586 "avg_latency_us": 50872.89741073625, 00:24:31.586 "min_latency_us": 8543.952592592592, 00:24:31.586 "max_latency_us": 50875.35407407407 00:24:31.586 } 00:24:31.586 ], 00:24:31.586 "core_count": 1 00:24:31.586 } 00:24:31.586 18:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:31.586 18:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.586 18:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.586 18:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.586 18:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:31.586 "subsystems": [ 00:24:31.586 { 00:24:31.586 "subsystem": 
"keyring", 00:24:31.586 "config": [ 00:24:31.586 { 00:24:31.586 "method": "keyring_file_add_key", 00:24:31.586 "params": { 00:24:31.586 "name": "key0", 00:24:31.586 "path": "/tmp/tmp.0YiorwwFtI" 00:24:31.586 } 00:24:31.586 } 00:24:31.586 ] 00:24:31.586 }, 00:24:31.586 { 00:24:31.586 "subsystem": "iobuf", 00:24:31.586 "config": [ 00:24:31.586 { 00:24:31.586 "method": "iobuf_set_options", 00:24:31.586 "params": { 00:24:31.586 "small_pool_count": 8192, 00:24:31.586 "large_pool_count": 1024, 00:24:31.586 "small_bufsize": 8192, 00:24:31.586 "large_bufsize": 135168, 00:24:31.586 "enable_numa": false 00:24:31.586 } 00:24:31.586 } 00:24:31.586 ] 00:24:31.586 }, 00:24:31.586 { 00:24:31.586 "subsystem": "sock", 00:24:31.586 "config": [ 00:24:31.586 { 00:24:31.586 "method": "sock_set_default_impl", 00:24:31.586 "params": { 00:24:31.586 "impl_name": "posix" 00:24:31.586 } 00:24:31.586 }, 00:24:31.586 { 00:24:31.586 "method": "sock_impl_set_options", 00:24:31.586 "params": { 00:24:31.586 "impl_name": "ssl", 00:24:31.586 "recv_buf_size": 4096, 00:24:31.586 "send_buf_size": 4096, 00:24:31.586 "enable_recv_pipe": true, 00:24:31.586 "enable_quickack": false, 00:24:31.586 "enable_placement_id": 0, 00:24:31.586 "enable_zerocopy_send_server": true, 00:24:31.586 "enable_zerocopy_send_client": false, 00:24:31.586 "zerocopy_threshold": 0, 00:24:31.586 "tls_version": 0, 00:24:31.586 "enable_ktls": false 00:24:31.586 } 00:24:31.586 }, 00:24:31.586 { 00:24:31.586 "method": "sock_impl_set_options", 00:24:31.586 "params": { 00:24:31.586 "impl_name": "posix", 00:24:31.586 "recv_buf_size": 2097152, 00:24:31.586 "send_buf_size": 2097152, 00:24:31.586 "enable_recv_pipe": true, 00:24:31.586 "enable_quickack": false, 00:24:31.587 "enable_placement_id": 0, 00:24:31.587 "enable_zerocopy_send_server": true, 00:24:31.587 "enable_zerocopy_send_client": false, 00:24:31.587 "zerocopy_threshold": 0, 00:24:31.587 "tls_version": 0, 00:24:31.587 "enable_ktls": false 00:24:31.587 } 00:24:31.587 } 00:24:31.587 
] 00:24:31.587 }, 00:24:31.587 { 00:24:31.587 "subsystem": "vmd", 00:24:31.587 "config": [] 00:24:31.587 }, 00:24:31.587 { 00:24:31.587 "subsystem": "accel", 00:24:31.587 "config": [ 00:24:31.587 { 00:24:31.587 "method": "accel_set_options", 00:24:31.587 "params": { 00:24:31.587 "small_cache_size": 128, 00:24:31.587 "large_cache_size": 16, 00:24:31.587 "task_count": 2048, 00:24:31.587 "sequence_count": 2048, 00:24:31.587 "buf_count": 2048 00:24:31.587 } 00:24:31.587 } 00:24:31.587 ] 00:24:31.587 }, 00:24:31.587 { 00:24:31.587 "subsystem": "bdev", 00:24:31.587 "config": [ 00:24:31.587 { 00:24:31.587 "method": "bdev_set_options", 00:24:31.587 "params": { 00:24:31.587 "bdev_io_pool_size": 65535, 00:24:31.587 "bdev_io_cache_size": 256, 00:24:31.587 "bdev_auto_examine": true, 00:24:31.587 "iobuf_small_cache_size": 128, 00:24:31.587 "iobuf_large_cache_size": 16 00:24:31.587 } 00:24:31.587 }, 00:24:31.587 { 00:24:31.587 "method": "bdev_raid_set_options", 00:24:31.587 "params": { 00:24:31.587 "process_window_size_kb": 1024, 00:24:31.587 "process_max_bandwidth_mb_sec": 0 00:24:31.587 } 00:24:31.587 }, 00:24:31.587 { 00:24:31.587 "method": "bdev_iscsi_set_options", 00:24:31.587 "params": { 00:24:31.587 "timeout_sec": 30 00:24:31.587 } 00:24:31.587 }, 00:24:31.587 { 00:24:31.587 "method": "bdev_nvme_set_options", 00:24:31.587 "params": { 00:24:31.587 "action_on_timeout": "none", 00:24:31.587 "timeout_us": 0, 00:24:31.587 "timeout_admin_us": 0, 00:24:31.587 "keep_alive_timeout_ms": 10000, 00:24:31.587 "arbitration_burst": 0, 00:24:31.587 "low_priority_weight": 0, 00:24:31.587 "medium_priority_weight": 0, 00:24:31.587 "high_priority_weight": 0, 00:24:31.587 "nvme_adminq_poll_period_us": 10000, 00:24:31.587 "nvme_ioq_poll_period_us": 0, 00:24:31.587 "io_queue_requests": 0, 00:24:31.587 "delay_cmd_submit": true, 00:24:31.587 "transport_retry_count": 4, 00:24:31.587 "bdev_retry_count": 3, 00:24:31.587 "transport_ack_timeout": 0, 00:24:31.587 "ctrlr_loss_timeout_sec": 0, 
00:24:31.587 "reconnect_delay_sec": 0, 00:24:31.587 "fast_io_fail_timeout_sec": 0, 00:24:31.587 "disable_auto_failback": false, 00:24:31.587 "generate_uuids": false, 00:24:31.587 "transport_tos": 0, 00:24:31.587 "nvme_error_stat": false, 00:24:31.587 "rdma_srq_size": 0, 00:24:31.587 "io_path_stat": false, 00:24:31.587 "allow_accel_sequence": false, 00:24:31.587 "rdma_max_cq_size": 0, 00:24:31.587 "rdma_cm_event_timeout_ms": 0, 00:24:31.587 "dhchap_digests": [ 00:24:31.587 "sha256", 00:24:31.587 "sha384", 00:24:31.587 "sha512" 00:24:31.587 ], 00:24:31.587 "dhchap_dhgroups": [ 00:24:31.587 "null", 00:24:31.587 "ffdhe2048", 00:24:31.587 "ffdhe3072", 00:24:31.587 "ffdhe4096", 00:24:31.587 "ffdhe6144", 00:24:31.587 "ffdhe8192" 00:24:31.587 ] 00:24:31.587 } 00:24:31.587 }, 00:24:31.587 { 00:24:31.587 "method": "bdev_nvme_set_hotplug", 00:24:31.587 "params": { 00:24:31.587 "period_us": 100000, 00:24:31.587 "enable": false 00:24:31.587 } 00:24:31.587 }, 00:24:31.587 { 00:24:31.587 "method": "bdev_malloc_create", 00:24:31.587 "params": { 00:24:31.587 "name": "malloc0", 00:24:31.587 "num_blocks": 8192, 00:24:31.587 "block_size": 4096, 00:24:31.587 "physical_block_size": 4096, 00:24:31.587 "uuid": "1b32e72e-0d65-4ae9-910b-a774fc4413f7", 00:24:31.587 "optimal_io_boundary": 0, 00:24:31.587 "md_size": 0, 00:24:31.587 "dif_type": 0, 00:24:31.587 "dif_is_head_of_md": false, 00:24:31.587 "dif_pi_format": 0 00:24:31.587 } 00:24:31.587 }, 00:24:31.587 { 00:24:31.587 "method": "bdev_wait_for_examine" 00:24:31.587 } 00:24:31.587 ] 00:24:31.587 }, 00:24:31.587 { 00:24:31.587 "subsystem": "nbd", 00:24:31.587 "config": [] 00:24:31.587 }, 00:24:31.587 { 00:24:31.587 "subsystem": "scheduler", 00:24:31.587 "config": [ 00:24:31.587 { 00:24:31.587 "method": "framework_set_scheduler", 00:24:31.587 "params": { 00:24:31.587 "name": "static" 00:24:31.587 } 00:24:31.587 } 00:24:31.587 ] 00:24:31.587 }, 00:24:31.587 { 00:24:31.587 "subsystem": "nvmf", 00:24:31.587 "config": [ 00:24:31.587 { 
00:24:31.587 "method": "nvmf_set_config", 00:24:31.587 "params": { 00:24:31.587 "discovery_filter": "match_any", 00:24:31.587 "admin_cmd_passthru": { 00:24:31.587 "identify_ctrlr": false 00:24:31.587 }, 00:24:31.587 "dhchap_digests": [ 00:24:31.587 "sha256", 00:24:31.587 "sha384", 00:24:31.587 "sha512" 00:24:31.587 ], 00:24:31.587 "dhchap_dhgroups": [ 00:24:31.587 "null", 00:24:31.587 "ffdhe2048", 00:24:31.587 "ffdhe3072", 00:24:31.587 "ffdhe4096", 00:24:31.587 "ffdhe6144", 00:24:31.587 "ffdhe8192" 00:24:31.587 ] 00:24:31.587 } 00:24:31.587 }, 00:24:31.587 { 00:24:31.587 "method": "nvmf_set_max_subsystems", 00:24:31.587 "params": { 00:24:31.587 "max_subsystems": 1024 00:24:31.587 } 00:24:31.587 }, 00:24:31.587 { 00:24:31.587 "method": "nvmf_set_crdt", 00:24:31.587 "params": { 00:24:31.587 "crdt1": 0, 00:24:31.587 "crdt2": 0, 00:24:31.587 "crdt3": 0 00:24:31.587 } 00:24:31.587 }, 00:24:31.587 { 00:24:31.587 "method": "nvmf_create_transport", 00:24:31.587 "params": { 00:24:31.587 "trtype": "TCP", 00:24:31.587 "max_queue_depth": 128, 00:24:31.587 "max_io_qpairs_per_ctrlr": 127, 00:24:31.587 "in_capsule_data_size": 4096, 00:24:31.587 "max_io_size": 131072, 00:24:31.587 "io_unit_size": 131072, 00:24:31.587 "max_aq_depth": 128, 00:24:31.587 "num_shared_buffers": 511, 00:24:31.587 "buf_cache_size": 4294967295, 00:24:31.587 "dif_insert_or_strip": false, 00:24:31.587 "zcopy": false, 00:24:31.587 "c2h_success": false, 00:24:31.587 "sock_priority": 0, 00:24:31.587 "abort_timeout_sec": 1, 00:24:31.587 "ack_timeout": 0, 00:24:31.587 "data_wr_pool_size": 0 00:24:31.587 } 00:24:31.587 }, 00:24:31.587 { 00:24:31.587 "method": "nvmf_create_subsystem", 00:24:31.587 "params": { 00:24:31.587 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:31.587 "allow_any_host": false, 00:24:31.587 "serial_number": "00000000000000000000", 00:24:31.587 "model_number": "SPDK bdev Controller", 00:24:31.587 "max_namespaces": 32, 00:24:31.587 "min_cntlid": 1, 00:24:31.587 "max_cntlid": 65519, 00:24:31.587 
"ana_reporting": false 00:24:31.587 } 00:24:31.587 }, 00:24:31.588 { 00:24:31.588 "method": "nvmf_subsystem_add_host", 00:24:31.588 "params": { 00:24:31.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:31.588 "host": "nqn.2016-06.io.spdk:host1", 00:24:31.588 "psk": "key0" 00:24:31.588 } 00:24:31.588 }, 00:24:31.588 { 00:24:31.588 "method": "nvmf_subsystem_add_ns", 00:24:31.588 "params": { 00:24:31.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:31.588 "namespace": { 00:24:31.588 "nsid": 1, 00:24:31.588 "bdev_name": "malloc0", 00:24:31.588 "nguid": "1B32E72E0D654AE9910BA774FC4413F7", 00:24:31.588 "uuid": "1b32e72e-0d65-4ae9-910b-a774fc4413f7", 00:24:31.588 "no_auto_visible": false 00:24:31.588 } 00:24:31.588 } 00:24:31.588 }, 00:24:31.588 { 00:24:31.588 "method": "nvmf_subsystem_add_listener", 00:24:31.588 "params": { 00:24:31.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:31.588 "listen_address": { 00:24:31.588 "trtype": "TCP", 00:24:31.588 "adrfam": "IPv4", 00:24:31.588 "traddr": "10.0.0.2", 00:24:31.588 "trsvcid": "4420" 00:24:31.588 }, 00:24:31.588 "secure_channel": false, 00:24:31.588 "sock_impl": "ssl" 00:24:31.588 } 00:24:31.588 } 00:24:31.588 ] 00:24:31.588 } 00:24:31.588 ] 00:24:31.588 }' 00:24:31.588 18:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:31.846 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:31.846 "subsystems": [ 00:24:31.846 { 00:24:31.846 "subsystem": "keyring", 00:24:31.846 "config": [ 00:24:31.846 { 00:24:31.846 "method": "keyring_file_add_key", 00:24:31.846 "params": { 00:24:31.846 "name": "key0", 00:24:31.846 "path": "/tmp/tmp.0YiorwwFtI" 00:24:31.846 } 00:24:31.846 } 00:24:31.846 ] 00:24:31.846 }, 00:24:31.846 { 00:24:31.846 "subsystem": "iobuf", 00:24:31.846 "config": [ 00:24:31.846 { 00:24:31.846 "method": "iobuf_set_options", 00:24:31.846 "params": { 00:24:31.846 
"small_pool_count": 8192, 00:24:31.846 "large_pool_count": 1024, 00:24:31.846 "small_bufsize": 8192, 00:24:31.846 "large_bufsize": 135168, 00:24:31.846 "enable_numa": false 00:24:31.846 } 00:24:31.846 } 00:24:31.846 ] 00:24:31.846 }, 00:24:31.846 { 00:24:31.846 "subsystem": "sock", 00:24:31.846 "config": [ 00:24:31.846 { 00:24:31.846 "method": "sock_set_default_impl", 00:24:31.846 "params": { 00:24:31.846 "impl_name": "posix" 00:24:31.846 } 00:24:31.846 }, 00:24:31.846 { 00:24:31.846 "method": "sock_impl_set_options", 00:24:31.846 "params": { 00:24:31.846 "impl_name": "ssl", 00:24:31.846 "recv_buf_size": 4096, 00:24:31.846 "send_buf_size": 4096, 00:24:31.846 "enable_recv_pipe": true, 00:24:31.846 "enable_quickack": false, 00:24:31.846 "enable_placement_id": 0, 00:24:31.846 "enable_zerocopy_send_server": true, 00:24:31.846 "enable_zerocopy_send_client": false, 00:24:31.846 "zerocopy_threshold": 0, 00:24:31.846 "tls_version": 0, 00:24:31.846 "enable_ktls": false 00:24:31.846 } 00:24:31.846 }, 00:24:31.846 { 00:24:31.846 "method": "sock_impl_set_options", 00:24:31.846 "params": { 00:24:31.846 "impl_name": "posix", 00:24:31.846 "recv_buf_size": 2097152, 00:24:31.846 "send_buf_size": 2097152, 00:24:31.846 "enable_recv_pipe": true, 00:24:31.846 "enable_quickack": false, 00:24:31.846 "enable_placement_id": 0, 00:24:31.846 "enable_zerocopy_send_server": true, 00:24:31.846 "enable_zerocopy_send_client": false, 00:24:31.846 "zerocopy_threshold": 0, 00:24:31.846 "tls_version": 0, 00:24:31.846 "enable_ktls": false 00:24:31.846 } 00:24:31.846 } 00:24:31.846 ] 00:24:31.846 }, 00:24:31.846 { 00:24:31.846 "subsystem": "vmd", 00:24:31.846 "config": [] 00:24:31.846 }, 00:24:31.846 { 00:24:31.846 "subsystem": "accel", 00:24:31.846 "config": [ 00:24:31.846 { 00:24:31.846 "method": "accel_set_options", 00:24:31.846 "params": { 00:24:31.846 "small_cache_size": 128, 00:24:31.846 "large_cache_size": 16, 00:24:31.846 "task_count": 2048, 00:24:31.846 "sequence_count": 2048, 00:24:31.846 
"buf_count": 2048 00:24:31.846 } 00:24:31.846 } 00:24:31.846 ] 00:24:31.846 }, 00:24:31.846 { 00:24:31.846 "subsystem": "bdev", 00:24:31.846 "config": [ 00:24:31.846 { 00:24:31.846 "method": "bdev_set_options", 00:24:31.846 "params": { 00:24:31.846 "bdev_io_pool_size": 65535, 00:24:31.846 "bdev_io_cache_size": 256, 00:24:31.846 "bdev_auto_examine": true, 00:24:31.846 "iobuf_small_cache_size": 128, 00:24:31.846 "iobuf_large_cache_size": 16 00:24:31.846 } 00:24:31.846 }, 00:24:31.846 { 00:24:31.846 "method": "bdev_raid_set_options", 00:24:31.846 "params": { 00:24:31.846 "process_window_size_kb": 1024, 00:24:31.846 "process_max_bandwidth_mb_sec": 0 00:24:31.846 } 00:24:31.846 }, 00:24:31.846 { 00:24:31.846 "method": "bdev_iscsi_set_options", 00:24:31.846 "params": { 00:24:31.846 "timeout_sec": 30 00:24:31.846 } 00:24:31.846 }, 00:24:31.846 { 00:24:31.846 "method": "bdev_nvme_set_options", 00:24:31.846 "params": { 00:24:31.846 "action_on_timeout": "none", 00:24:31.846 "timeout_us": 0, 00:24:31.846 "timeout_admin_us": 0, 00:24:31.846 "keep_alive_timeout_ms": 10000, 00:24:31.846 "arbitration_burst": 0, 00:24:31.846 "low_priority_weight": 0, 00:24:31.846 "medium_priority_weight": 0, 00:24:31.846 "high_priority_weight": 0, 00:24:31.846 "nvme_adminq_poll_period_us": 10000, 00:24:31.846 "nvme_ioq_poll_period_us": 0, 00:24:31.846 "io_queue_requests": 512, 00:24:31.846 "delay_cmd_submit": true, 00:24:31.846 "transport_retry_count": 4, 00:24:31.846 "bdev_retry_count": 3, 00:24:31.846 "transport_ack_timeout": 0, 00:24:31.846 "ctrlr_loss_timeout_sec": 0, 00:24:31.846 "reconnect_delay_sec": 0, 00:24:31.846 "fast_io_fail_timeout_sec": 0, 00:24:31.847 "disable_auto_failback": false, 00:24:31.847 "generate_uuids": false, 00:24:31.847 "transport_tos": 0, 00:24:31.847 "nvme_error_stat": false, 00:24:31.847 "rdma_srq_size": 0, 00:24:31.847 "io_path_stat": false, 00:24:31.847 "allow_accel_sequence": false, 00:24:31.847 "rdma_max_cq_size": 0, 00:24:31.847 "rdma_cm_event_timeout_ms": 0, 
00:24:31.847 "dhchap_digests": [ 00:24:31.847 "sha256", 00:24:31.847 "sha384", 00:24:31.847 "sha512" 00:24:31.847 ], 00:24:31.847 "dhchap_dhgroups": [ 00:24:31.847 "null", 00:24:31.847 "ffdhe2048", 00:24:31.847 "ffdhe3072", 00:24:31.847 "ffdhe4096", 00:24:31.847 "ffdhe6144", 00:24:31.847 "ffdhe8192" 00:24:31.847 ] 00:24:31.847 } 00:24:31.847 }, 00:24:31.847 { 00:24:31.847 "method": "bdev_nvme_attach_controller", 00:24:31.847 "params": { 00:24:31.847 "name": "nvme0", 00:24:31.847 "trtype": "TCP", 00:24:31.847 "adrfam": "IPv4", 00:24:31.847 "traddr": "10.0.0.2", 00:24:31.847 "trsvcid": "4420", 00:24:31.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:31.847 "prchk_reftag": false, 00:24:31.847 "prchk_guard": false, 00:24:31.847 "ctrlr_loss_timeout_sec": 0, 00:24:31.847 "reconnect_delay_sec": 0, 00:24:31.847 "fast_io_fail_timeout_sec": 0, 00:24:31.847 "psk": "key0", 00:24:31.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:31.847 "hdgst": false, 00:24:31.847 "ddgst": false, 00:24:31.847 "multipath": "multipath" 00:24:31.847 } 00:24:31.847 }, 00:24:31.847 { 00:24:31.847 "method": "bdev_nvme_set_hotplug", 00:24:31.847 "params": { 00:24:31.847 "period_us": 100000, 00:24:31.847 "enable": false 00:24:31.847 } 00:24:31.847 }, 00:24:31.847 { 00:24:31.847 "method": "bdev_enable_histogram", 00:24:31.847 "params": { 00:24:31.847 "name": "nvme0n1", 00:24:31.847 "enable": true 00:24:31.847 } 00:24:31.847 }, 00:24:31.847 { 00:24:31.847 "method": "bdev_wait_for_examine" 00:24:31.847 } 00:24:31.847 ] 00:24:31.847 }, 00:24:31.847 { 00:24:31.847 "subsystem": "nbd", 00:24:31.847 "config": [] 00:24:31.847 } 00:24:31.847 ] 00:24:31.847 }' 00:24:31.847 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3005255 00:24:31.847 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3005255 ']' 00:24:31.847 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3005255 00:24:31.847 18:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:31.847 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:31.847 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3005255 00:24:31.847 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:31.847 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:31.847 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3005255' 00:24:31.847 killing process with pid 3005255 00:24:31.847 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3005255 00:24:31.847 Received shutdown signal, test time was about 1.000000 seconds 00:24:31.847 00:24:31.847 Latency(us) 00:24:31.847 [2024-11-18T17:31:30.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.847 [2024-11-18T17:31:30.184Z] =================================================================================================================== 00:24:31.847 [2024-11-18T17:31:30.184Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:31.847 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3005255 00:24:32.781 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3005100 00:24:32.781 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3005100 ']' 00:24:32.781 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3005100 00:24:32.781 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:32.781 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:32.781 
18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3005100 00:24:32.781 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:32.781 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:32.781 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3005100' 00:24:32.781 killing process with pid 3005100 00:24:32.781 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3005100 00:24:32.781 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3005100 00:24:34.172 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:34.172 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:34.172 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:34.172 "subsystems": [ 00:24:34.172 { 00:24:34.172 "subsystem": "keyring", 00:24:34.172 "config": [ 00:24:34.172 { 00:24:34.172 "method": "keyring_file_add_key", 00:24:34.172 "params": { 00:24:34.172 "name": "key0", 00:24:34.172 "path": "/tmp/tmp.0YiorwwFtI" 00:24:34.172 } 00:24:34.172 } 00:24:34.172 ] 00:24:34.172 }, 00:24:34.172 { 00:24:34.172 "subsystem": "iobuf", 00:24:34.172 "config": [ 00:24:34.172 { 00:24:34.172 "method": "iobuf_set_options", 00:24:34.172 "params": { 00:24:34.172 "small_pool_count": 8192, 00:24:34.172 "large_pool_count": 1024, 00:24:34.172 "small_bufsize": 8192, 00:24:34.172 "large_bufsize": 135168, 00:24:34.172 "enable_numa": false 00:24:34.172 } 00:24:34.172 } 00:24:34.172 ] 00:24:34.172 }, 00:24:34.172 { 00:24:34.172 "subsystem": "sock", 00:24:34.172 "config": [ 00:24:34.172 { 00:24:34.172 "method": "sock_set_default_impl", 00:24:34.172 "params": { 00:24:34.172 "impl_name": "posix" 
00:24:34.172 } 00:24:34.172 }, 00:24:34.172 { 00:24:34.172 "method": "sock_impl_set_options", 00:24:34.172 "params": { 00:24:34.172 "impl_name": "ssl", 00:24:34.172 "recv_buf_size": 4096, 00:24:34.172 "send_buf_size": 4096, 00:24:34.172 "enable_recv_pipe": true, 00:24:34.172 "enable_quickack": false, 00:24:34.172 "enable_placement_id": 0, 00:24:34.172 "enable_zerocopy_send_server": true, 00:24:34.172 "enable_zerocopy_send_client": false, 00:24:34.172 "zerocopy_threshold": 0, 00:24:34.172 "tls_version": 0, 00:24:34.172 "enable_ktls": false 00:24:34.172 } 00:24:34.172 }, 00:24:34.172 { 00:24:34.172 "method": "sock_impl_set_options", 00:24:34.172 "params": { 00:24:34.172 "impl_name": "posix", 00:24:34.172 "recv_buf_size": 2097152, 00:24:34.172 "send_buf_size": 2097152, 00:24:34.172 "enable_recv_pipe": true, 00:24:34.172 "enable_quickack": false, 00:24:34.172 "enable_placement_id": 0, 00:24:34.172 "enable_zerocopy_send_server": true, 00:24:34.172 "enable_zerocopy_send_client": false, 00:24:34.172 "zerocopy_threshold": 0, 00:24:34.172 "tls_version": 0, 00:24:34.172 "enable_ktls": false 00:24:34.172 } 00:24:34.172 } 00:24:34.172 ] 00:24:34.172 }, 00:24:34.172 { 00:24:34.172 "subsystem": "vmd", 00:24:34.172 "config": [] 00:24:34.172 }, 00:24:34.172 { 00:24:34.172 "subsystem": "accel", 00:24:34.172 "config": [ 00:24:34.172 { 00:24:34.172 "method": "accel_set_options", 00:24:34.172 "params": { 00:24:34.172 "small_cache_size": 128, 00:24:34.172 "large_cache_size": 16, 00:24:34.172 "task_count": 2048, 00:24:34.172 "sequence_count": 2048, 00:24:34.172 "buf_count": 2048 00:24:34.172 } 00:24:34.172 } 00:24:34.172 ] 00:24:34.172 }, 00:24:34.172 { 00:24:34.172 "subsystem": "bdev", 00:24:34.172 "config": [ 00:24:34.172 { 00:24:34.172 "method": "bdev_set_options", 00:24:34.172 "params": { 00:24:34.172 "bdev_io_pool_size": 65535, 00:24:34.172 "bdev_io_cache_size": 256, 00:24:34.172 "bdev_auto_examine": true, 00:24:34.172 "iobuf_small_cache_size": 128, 00:24:34.172 
"iobuf_large_cache_size": 16 00:24:34.172 } 00:24:34.172 }, 00:24:34.172 { 00:24:34.172 "method": "bdev_raid_set_options", 00:24:34.172 "params": { 00:24:34.172 "process_window_size_kb": 1024, 00:24:34.172 "process_max_bandwidth_mb_sec": 0 00:24:34.172 } 00:24:34.172 }, 00:24:34.172 { 00:24:34.172 "method": "bdev_iscsi_set_options", 00:24:34.172 "params": { 00:24:34.172 "timeout_sec": 30 00:24:34.172 } 00:24:34.172 }, 00:24:34.172 { 00:24:34.172 "method": "bdev_nvme_set_options", 00:24:34.172 "params": { 00:24:34.172 "action_on_timeout": "none", 00:24:34.172 "timeout_us": 0, 00:24:34.172 "timeout_admin_us": 0, 00:24:34.172 "keep_alive_timeout_ms": 10000, 00:24:34.172 "arbitration_burst": 0, 00:24:34.172 "low_priority_weight": 0, 00:24:34.172 "medium_priority_weight": 0, 00:24:34.172 "high_priority_weight": 0, 00:24:34.172 "nvme_adminq_poll_period_us": 10000, 00:24:34.172 "nvme_ioq_poll_period_us": 0, 00:24:34.172 "io_queue_requests": 0, 00:24:34.172 "delay_cmd_submit": true, 00:24:34.172 "transport_retry_count": 4, 00:24:34.172 "bdev_retry_count": 3, 00:24:34.172 "transport_ack_timeout": 0, 00:24:34.172 "ctrlr_loss_timeout_sec": 0, 00:24:34.172 "reconnect_delay_sec": 0, 00:24:34.172 "fast_io_fail_timeout_sec": 0, 00:24:34.172 "disable_auto_failback": false, 00:24:34.172 "generate_uuids": false, 00:24:34.172 "transport_tos": 0, 00:24:34.172 "nvme_error_stat": false, 00:24:34.172 "rdma_srq_size": 0, 00:24:34.172 "io_path_stat": false, 00:24:34.172 "allow_accel_sequence": false, 00:24:34.172 "rdma_max_cq_size": 0, 00:24:34.172 "rdma_cm_event_timeout_ms": 0, 00:24:34.172 "dhchap_digests": [ 00:24:34.172 "sha256", 00:24:34.172 "sha384", 00:24:34.172 "sha512" 00:24:34.172 ], 00:24:34.172 "dhchap_dhgroups": [ 00:24:34.172 "null", 00:24:34.172 "ffdhe2048", 00:24:34.172 "ffdhe3072", 00:24:34.172 "ffdhe4096", 00:24:34.172 "ffdhe6144", 00:24:34.172 "ffdhe8192" 00:24:34.172 ] 00:24:34.172 } 00:24:34.172 }, 00:24:34.172 { 00:24:34.172 "method": "bdev_nvme_set_hotplug", 
00:24:34.172 "params": { 00:24:34.173 "period_us": 100000, 00:24:34.173 "enable": false 00:24:34.173 } 00:24:34.173 }, 00:24:34.173 { 00:24:34.173 "method": "bdev_malloc_create", 00:24:34.173 "params": { 00:24:34.173 "name": "malloc0", 00:24:34.173 "num_blocks": 8192, 00:24:34.173 "block_size": 4096, 00:24:34.173 "physical_block_size": 4096, 00:24:34.173 "uuid": "1b32e72e-0d65-4ae9-910b-a774fc4413f7", 00:24:34.173 "optimal_io_boundary": 0, 00:24:34.173 "md_size": 0, 00:24:34.173 "dif_type": 0, 00:24:34.173 "dif_is_head_of_md": false, 00:24:34.173 "dif_pi_format": 0 00:24:34.173 } 00:24:34.173 }, 00:24:34.173 { 00:24:34.173 "method": "bdev_wait_for_examine" 00:24:34.173 } 00:24:34.173 ] 00:24:34.173 }, 00:24:34.173 { 00:24:34.173 "subsystem": "nbd", 00:24:34.173 "config": [] 00:24:34.173 }, 00:24:34.173 { 00:24:34.173 "subsystem": "scheduler", 00:24:34.173 "config": [ 00:24:34.173 { 00:24:34.173 "method": "framework_set_scheduler", 00:24:34.173 "params": { 00:24:34.173 "name": "static" 00:24:34.173 } 00:24:34.173 } 00:24:34.173 ] 00:24:34.173 }, 00:24:34.173 { 00:24:34.173 "subsystem": "nvmf", 00:24:34.173 "config": [ 00:24:34.173 { 00:24:34.173 "method": "nvmf_set_config", 00:24:34.173 "params": { 00:24:34.173 "discovery_filter": "match_any", 00:24:34.173 "admin_cmd_passthru": { 00:24:34.173 "identify_ctrlr": false 00:24:34.173 }, 00:24:34.173 "dhchap_digests": [ 00:24:34.173 "sha256", 00:24:34.173 "sha384", 00:24:34.173 "sha512" 00:24:34.173 ], 00:24:34.173 "dhchap_dhgroups": [ 00:24:34.173 "null", 00:24:34.173 "ffdhe2048", 00:24:34.173 "ffdhe3072", 00:24:34.173 "ffdhe4096", 00:24:34.173 "ffdhe6144", 00:24:34.173 "ffdhe8192" 00:24:34.173 ] 00:24:34.173 } 00:24:34.173 }, 00:24:34.173 { 00:24:34.173 "method": "nvmf_set_max_subsystems", 00:24:34.173 "params": { 00:24:34.173 "max_subsystems": 1024 00:24:34.173 } 00:24:34.173 }, 00:24:34.173 { 00:24:34.173 "method": "nvmf_set_crdt", 00:24:34.173 "params": { 00:24:34.173 "crdt1": 0, 00:24:34.173 "crdt2": 0, 00:24:34.173 
"crdt3": 0 00:24:34.173 } 00:24:34.173 }, 00:24:34.173 { 00:24:34.173 "method": "nvmf_create_transport", 00:24:34.173 "params": { 00:24:34.173 "trtype": "TCP", 00:24:34.173 "max_queue_depth": 128, 00:24:34.173 "max_io_qpairs_per_ctrlr": 127, 00:24:34.173 "in_capsule_data_size": 4096, 00:24:34.173 "max_io_size": 131072, 00:24:34.173 "io_unit_size": 131072, 00:24:34.173 "max_aq_depth": 128, 00:24:34.173 "num_shared_buffers": 511, 00:24:34.173 "buf_cache_size": 4294967295, 00:24:34.173 "dif_insert_or_strip": false, 00:24:34.173 "zcopy": false, 00:24:34.173 "c2h_success": false, 00:24:34.173 "sock_priority": 0, 00:24:34.173 "abort_timeout_sec": 1, 00:24:34.173 "ack_timeout": 0, 00:24:34.173 "data_wr_pool_size": 0 00:24:34.173 } 00:24:34.173 }, 00:24:34.173 { 00:24:34.173 "method": "nvmf_create_subsystem", 00:24:34.173 "params": { 00:24:34.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.173 "allow_any_host": false, 00:24:34.173 "serial_number": "00000000000000000000", 00:24:34.173 "model_number": "SPDK bdev Controller", 00:24:34.173 "max_namespaces": 32, 00:24:34.173 "min_cntlid": 1, 00:24:34.173 "max_cntlid": 65519, 00:24:34.173 "ana_reporting": false 00:24:34.173 } 00:24:34.173 }, 00:24:34.173 { 00:24:34.173 "method": "nvmf_subsystem_add_host", 00:24:34.173 "params": { 00:24:34.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.173 "host": "nqn.2016-06.io.spdk:host1", 00:24:34.173 "psk": "key0" 00:24:34.173 } 00:24:34.173 }, 00:24:34.173 { 00:24:34.173 "method": "nvmf_subsystem_add_ns", 00:24:34.173 "params": { 00:24:34.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.173 "namespace": { 00:24:34.173 "nsid": 1, 00:24:34.173 "bdev_name": "malloc0", 00:24:34.173 "nguid": "1B32E72E0D654AE9910BA774FC4413F7", 00:24:34.173 "uuid": "1b32e72e-0d65-4ae9-910b-a774fc4413f7", 00:24:34.173 "no_auto_visible": false 00:24:34.173 } 00:24:34.173 } 00:24:34.173 }, 00:24:34.173 { 00:24:34.173 "method": "nvmf_subsystem_add_listener", 00:24:34.173 "params": { 00:24:34.173 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:24:34.173 "listen_address": { 00:24:34.173 "trtype": "TCP", 00:24:34.173 "adrfam": "IPv4", 00:24:34.173 "traddr": "10.0.0.2", 00:24:34.173 "trsvcid": "4420" 00:24:34.173 }, 00:24:34.173 "secure_channel": false, 00:24:34.173 "sock_impl": "ssl" 00:24:34.173 } 00:24:34.173 } 00:24:34.173 ] 00:24:34.173 } 00:24:34.173 ] 00:24:34.173 }' 00:24:34.173 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.173 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.173 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3005928 00:24:34.173 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:34.173 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3005928 00:24:34.173 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3005928 ']' 00:24:34.173 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.173 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.173 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.173 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.173 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.173 [2024-11-18 18:31:32.395845] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:24:34.173 [2024-11-18 18:31:32.395988] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.431 [2024-11-18 18:31:32.546624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.431 [2024-11-18 18:31:32.683716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.431 [2024-11-18 18:31:32.683821] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.431 [2024-11-18 18:31:32.683847] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.431 [2024-11-18 18:31:32.683872] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.431 [2024-11-18 18:31:32.683892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:34.431 [2024-11-18 18:31:32.685647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.998 [2024-11-18 18:31:33.220015] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.998 [2024-11-18 18:31:33.252083] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:34.998 [2024-11-18 18:31:33.252417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.256 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.256 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:35.256 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:35.256 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:35.256 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.256 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.256 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3006079 00:24:35.256 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3006079 /var/tmp/bdevperf.sock 00:24:35.256 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3006079 ']' 00:24:35.256 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:35.256 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:35.256 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c 
/dev/fd/63 00:24:35.256 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:35.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:35.256 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:35.256 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.256 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:35.256 "subsystems": [ 00:24:35.256 { 00:24:35.256 "subsystem": "keyring", 00:24:35.256 "config": [ 00:24:35.256 { 00:24:35.256 "method": "keyring_file_add_key", 00:24:35.256 "params": { 00:24:35.256 "name": "key0", 00:24:35.256 "path": "/tmp/tmp.0YiorwwFtI" 00:24:35.256 } 00:24:35.256 } 00:24:35.256 ] 00:24:35.256 }, 00:24:35.256 { 00:24:35.256 "subsystem": "iobuf", 00:24:35.256 "config": [ 00:24:35.256 { 00:24:35.256 "method": "iobuf_set_options", 00:24:35.256 "params": { 00:24:35.256 "small_pool_count": 8192, 00:24:35.256 "large_pool_count": 1024, 00:24:35.256 "small_bufsize": 8192, 00:24:35.256 "large_bufsize": 135168, 00:24:35.256 "enable_numa": false 00:24:35.256 } 00:24:35.256 } 00:24:35.256 ] 00:24:35.256 }, 00:24:35.256 { 00:24:35.256 "subsystem": "sock", 00:24:35.256 "config": [ 00:24:35.256 { 00:24:35.256 "method": "sock_set_default_impl", 00:24:35.256 "params": { 00:24:35.256 "impl_name": "posix" 00:24:35.256 } 00:24:35.256 }, 00:24:35.256 { 00:24:35.256 "method": "sock_impl_set_options", 00:24:35.256 "params": { 00:24:35.256 "impl_name": "ssl", 00:24:35.256 "recv_buf_size": 4096, 00:24:35.256 "send_buf_size": 4096, 00:24:35.256 "enable_recv_pipe": true, 00:24:35.256 "enable_quickack": false, 00:24:35.256 "enable_placement_id": 0, 00:24:35.256 "enable_zerocopy_send_server": true, 00:24:35.256 "enable_zerocopy_send_client": false, 00:24:35.256 
"zerocopy_threshold": 0, 00:24:35.256 "tls_version": 0, 00:24:35.256 "enable_ktls": false 00:24:35.256 } 00:24:35.256 }, 00:24:35.256 { 00:24:35.256 "method": "sock_impl_set_options", 00:24:35.256 "params": { 00:24:35.256 "impl_name": "posix", 00:24:35.256 "recv_buf_size": 2097152, 00:24:35.256 "send_buf_size": 2097152, 00:24:35.256 "enable_recv_pipe": true, 00:24:35.256 "enable_quickack": false, 00:24:35.256 "enable_placement_id": 0, 00:24:35.256 "enable_zerocopy_send_server": true, 00:24:35.256 "enable_zerocopy_send_client": false, 00:24:35.256 "zerocopy_threshold": 0, 00:24:35.256 "tls_version": 0, 00:24:35.256 "enable_ktls": false 00:24:35.256 } 00:24:35.256 } 00:24:35.256 ] 00:24:35.256 }, 00:24:35.256 { 00:24:35.256 "subsystem": "vmd", 00:24:35.256 "config": [] 00:24:35.256 }, 00:24:35.256 { 00:24:35.256 "subsystem": "accel", 00:24:35.256 "config": [ 00:24:35.256 { 00:24:35.256 "method": "accel_set_options", 00:24:35.256 "params": { 00:24:35.256 "small_cache_size": 128, 00:24:35.256 "large_cache_size": 16, 00:24:35.256 "task_count": 2048, 00:24:35.256 "sequence_count": 2048, 00:24:35.256 "buf_count": 2048 00:24:35.256 } 00:24:35.256 } 00:24:35.256 ] 00:24:35.256 }, 00:24:35.256 { 00:24:35.256 "subsystem": "bdev", 00:24:35.256 "config": [ 00:24:35.256 { 00:24:35.256 "method": "bdev_set_options", 00:24:35.256 "params": { 00:24:35.256 "bdev_io_pool_size": 65535, 00:24:35.256 "bdev_io_cache_size": 256, 00:24:35.256 "bdev_auto_examine": true, 00:24:35.256 "iobuf_small_cache_size": 128, 00:24:35.256 "iobuf_large_cache_size": 16 00:24:35.256 } 00:24:35.256 }, 00:24:35.256 { 00:24:35.256 "method": "bdev_raid_set_options", 00:24:35.256 "params": { 00:24:35.256 "process_window_size_kb": 1024, 00:24:35.256 "process_max_bandwidth_mb_sec": 0 00:24:35.256 } 00:24:35.256 }, 00:24:35.256 { 00:24:35.256 "method": "bdev_iscsi_set_options", 00:24:35.256 "params": { 00:24:35.256 "timeout_sec": 30 00:24:35.256 } 00:24:35.256 }, 00:24:35.256 { 00:24:35.257 "method": 
"bdev_nvme_set_options", 00:24:35.257 "params": { 00:24:35.257 "action_on_timeout": "none", 00:24:35.257 "timeout_us": 0, 00:24:35.257 "timeout_admin_us": 0, 00:24:35.257 "keep_alive_timeout_ms": 10000, 00:24:35.257 "arbitration_burst": 0, 00:24:35.257 "low_priority_weight": 0, 00:24:35.257 "medium_priority_weight": 0, 00:24:35.257 "high_priority_weight": 0, 00:24:35.257 "nvme_adminq_poll_period_us": 10000, 00:24:35.257 "nvme_ioq_poll_period_us": 0, 00:24:35.257 "io_queue_requests": 512, 00:24:35.257 "delay_cmd_submit": true, 00:24:35.257 "transport_retry_count": 4, 00:24:35.257 "bdev_retry_count": 3, 00:24:35.257 "transport_ack_timeout": 0, 00:24:35.257 "ctrlr_loss_timeout_sec": 0, 00:24:35.257 "reconnect_delay_sec": 0, 00:24:35.257 "fast_io_fail_timeout_sec": 0, 00:24:35.257 "disable_auto_failback": false, 00:24:35.257 "generate_uuids": false, 00:24:35.257 "transport_tos": 0, 00:24:35.257 "nvme_error_stat": false, 00:24:35.257 "rdma_srq_size": 0, 00:24:35.257 "io_path_stat": false, 00:24:35.257 "allow_accel_sequence": false, 00:24:35.257 "rdma_max_cq_size": 0, 00:24:35.257 "rdma_cm_event_timeout_ms": 0, 00:24:35.257 "dhchap_digests": [ 00:24:35.257 "sha256", 00:24:35.257 "sha384", 00:24:35.257 "sha512" 00:24:35.257 ], 00:24:35.257 "dhchap_dhgroups": [ 00:24:35.257 "null", 00:24:35.257 "ffdhe2048", 00:24:35.257 "ffdhe3072", 00:24:35.257 "ffdhe4096", 00:24:35.257 "ffdhe6144", 00:24:35.257 "ffdhe8192" 00:24:35.257 ] 00:24:35.257 } 00:24:35.257 }, 00:24:35.257 { 00:24:35.257 "method": "bdev_nvme_attach_controller", 00:24:35.257 "params": { 00:24:35.257 "name": "nvme0", 00:24:35.257 "trtype": "TCP", 00:24:35.257 "adrfam": "IPv4", 00:24:35.257 "traddr": "10.0.0.2", 00:24:35.257 "trsvcid": "4420", 00:24:35.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.257 "prchk_reftag": false, 00:24:35.257 "prchk_guard": false, 00:24:35.257 "ctrlr_loss_timeout_sec": 0, 00:24:35.257 "reconnect_delay_sec": 0, 00:24:35.257 "fast_io_fail_timeout_sec": 0, 00:24:35.257 "psk": "key0", 
00:24:35.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:35.257 "hdgst": false, 00:24:35.257 "ddgst": false, 00:24:35.257 "multipath": "multipath" 00:24:35.257 } 00:24:35.257 }, 00:24:35.257 { 00:24:35.257 "method": "bdev_nvme_set_hotplug", 00:24:35.257 "params": { 00:24:35.257 "period_us": 100000, 00:24:35.257 "enable": false 00:24:35.257 } 00:24:35.257 }, 00:24:35.257 { 00:24:35.257 "method": "bdev_enable_histogram", 00:24:35.257 "params": { 00:24:35.257 "name": "nvme0n1", 00:24:35.257 "enable": true 00:24:35.257 } 00:24:35.257 }, 00:24:35.257 { 00:24:35.257 "method": "bdev_wait_for_examine" 00:24:35.257 } 00:24:35.257 ] 00:24:35.257 }, 00:24:35.257 { 00:24:35.257 "subsystem": "nbd", 00:24:35.257 "config": [] 00:24:35.257 } 00:24:35.257 ] 00:24:35.257 }' 00:24:35.257 [2024-11-18 18:31:33.443359] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:35.257 [2024-11-18 18:31:33.443492] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3006079 ] 00:24:35.257 [2024-11-18 18:31:33.576911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.515 [2024-11-18 18:31:33.705874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.081 [2024-11-18 18:31:34.149844] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:36.338 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.338 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:36.338 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:36.338 18:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:36.596 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.596 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:36.596 Running I/O for 1 seconds... 00:24:37.529 2593.00 IOPS, 10.13 MiB/s 00:24:37.529 Latency(us) 00:24:37.529 [2024-11-18T17:31:35.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.529 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:37.529 Verification LBA range: start 0x0 length 0x2000 00:24:37.529 nvme0n1 : 1.03 2649.56 10.35 0.00 0.00 47760.21 9903.22 37282.70 00:24:37.529 [2024-11-18T17:31:35.866Z] =================================================================================================================== 00:24:37.529 [2024-11-18T17:31:35.866Z] Total : 2649.56 10.35 0.00 0.00 47760.21 9903.22 37282.70 00:24:37.529 { 00:24:37.529 "results": [ 00:24:37.529 { 00:24:37.529 "job": "nvme0n1", 00:24:37.529 "core_mask": "0x2", 00:24:37.529 "workload": "verify", 00:24:37.529 "status": "finished", 00:24:37.529 "verify_range": { 00:24:37.529 "start": 0, 00:24:37.529 "length": 8192 00:24:37.529 }, 00:24:37.529 "queue_depth": 128, 00:24:37.529 "io_size": 4096, 00:24:37.529 "runtime": 1.026962, 00:24:37.529 "iops": 2649.5624959832985, 00:24:37.529 "mibps": 10.34985349993476, 00:24:37.529 "io_failed": 0, 00:24:37.529 "io_timeout": 0, 00:24:37.529 "avg_latency_us": 47760.210789606215, 00:24:37.529 "min_latency_us": 9903.217777777778, 00:24:37.529 "max_latency_us": 37282.70222222222 00:24:37.529 } 00:24:37.529 ], 00:24:37.529 "core_count": 1 00:24:37.529 } 00:24:37.529 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:37.529 18:31:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:37.529 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:37.529 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:37.529 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:37.529 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:37.529 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:37.529 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:37.529 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:37.529 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:37.529 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:37.529 nvmf_trace.0 00:24:37.787 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:37.787 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3006079 00:24:37.787 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3006079 ']' 00:24:37.787 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3006079 00:24:37.787 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:37.787 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:37.787 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 3006079 00:24:37.787 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:37.787 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:37.787 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3006079' 00:24:37.787 killing process with pid 3006079 00:24:37.787 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3006079 00:24:37.787 Received shutdown signal, test time was about 1.000000 seconds 00:24:37.787 00:24:37.787 Latency(us) 00:24:37.787 [2024-11-18T17:31:36.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.787 [2024-11-18T17:31:36.124Z] =================================================================================================================== 00:24:37.787 [2024-11-18T17:31:36.124Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:37.787 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3006079 00:24:38.721 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:38.721 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:38.721 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:38.721 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:38.721 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:38.721 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:38.721 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:38.721 rmmod nvme_tcp 00:24:38.721 rmmod nvme_fabrics 00:24:38.721 rmmod nvme_keyring 00:24:38.722 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:24:38.722 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:38.722 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:38.722 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3005928 ']' 00:24:38.722 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3005928 00:24:38.722 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3005928 ']' 00:24:38.722 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3005928 00:24:38.722 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:38.722 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.722 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3005928 00:24:38.722 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:38.722 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:38.722 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3005928' 00:24:38.722 killing process with pid 3005928 00:24:38.722 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3005928 00:24:38.722 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3005928 00:24:40.095 18:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:40.095 18:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:40.095 18:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:40.095 18:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:24:40.095 18:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:40.095 18:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:40.095 18:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:40.095 18:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:40.095 18:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:40.095 18:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.095 18:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.095 18:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.050 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:42.050 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.QvcQ6MksAo /tmp/tmp.r2wH3ws8uW /tmp/tmp.0YiorwwFtI 00:24:42.050 00:24:42.050 real 1m53.218s 00:24:42.050 user 3m11.186s 00:24:42.050 sys 0m25.542s 00:24:42.050 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:42.050 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.050 ************************************ 00:24:42.050 END TEST nvmf_tls 00:24:42.050 ************************************ 00:24:42.051 18:31:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:42.051 18:31:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:42.051 18:31:40 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:24:42.051 18:31:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:42.051 ************************************ 00:24:42.051 START TEST nvmf_fips 00:24:42.051 ************************************ 00:24:42.051 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:42.309 * Looking for test storage... 00:24:42.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:42.309 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:42.309 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:24:42.309 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:42.309 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:42.309 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:42.309 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:42.309 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:42.309 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:42.309 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:42.309 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:42.309 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:42.309 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:42.309 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:42.309 
18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:42.309 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:42.309 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:42.309 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:42.310 18:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:42.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.310 --rc genhtml_branch_coverage=1 00:24:42.310 --rc genhtml_function_coverage=1 00:24:42.310 --rc genhtml_legend=1 00:24:42.310 --rc geninfo_all_blocks=1 00:24:42.310 --rc geninfo_unexecuted_blocks=1 00:24:42.310 00:24:42.310 ' 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:42.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.310 --rc genhtml_branch_coverage=1 00:24:42.310 --rc genhtml_function_coverage=1 00:24:42.310 --rc genhtml_legend=1 00:24:42.310 --rc geninfo_all_blocks=1 00:24:42.310 --rc geninfo_unexecuted_blocks=1 00:24:42.310 00:24:42.310 ' 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:42.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.310 --rc genhtml_branch_coverage=1 00:24:42.310 --rc genhtml_function_coverage=1 00:24:42.310 --rc genhtml_legend=1 00:24:42.310 --rc geninfo_all_blocks=1 00:24:42.310 --rc geninfo_unexecuted_blocks=1 00:24:42.310 00:24:42.310 ' 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:42.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.310 --rc genhtml_branch_coverage=1 00:24:42.310 --rc genhtml_function_coverage=1 00:24:42.310 --rc genhtml_legend=1 00:24:42.310 --rc geninfo_all_blocks=1 00:24:42.310 --rc geninfo_unexecuted_blocks=1 00:24:42.310 00:24:42.310 ' 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.310 18:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.310 18:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:42.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:42.310 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:42.311 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:42.599 Error setting digest 00:24:42.599 40123666227F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:42.599 40123666227F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:42.599 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:42.599 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:42.599 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:42.600 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:42.600 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:42.600 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:42.600 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.600 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:42.600 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:42.600 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:42.600 18:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.600 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.600 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.600 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:42.600 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:42.600 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:42.600 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:44.501 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:44.501 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:44.501 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:44.501 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:44.501 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:44.501 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:44.502 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:44.502 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:44.502 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:44.502 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:44.502 18:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:44.502 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:44.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:44.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:24:44.761 00:24:44.761 --- 10.0.0.2 ping statistics --- 00:24:44.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.761 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:44.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:44.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:24:44.761 00:24:44.761 --- 10.0.0.1 ping statistics --- 00:24:44.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.761 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:44.761 18:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3008594 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3008594 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3008594 ']' 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:44.761 18:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:44.761 [2024-11-18 18:31:43.045591] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:24:44.761 [2024-11-18 18:31:43.045747] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.020 [2024-11-18 18:31:43.197144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.020 [2024-11-18 18:31:43.336387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.020 [2024-11-18 18:31:43.336475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.020 [2024-11-18 18:31:43.336500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.020 [2024-11-18 18:31:43.336524] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.020 [2024-11-18 18:31:43.336550] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:45.020 [2024-11-18 18:31:43.338222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.953 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:45.953 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:45.953 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:45.953 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:45.953 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:45.953 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.953 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:45.953 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:45.953 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:45.953 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.n7n 00:24:45.953 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:45.953 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.n7n 00:24:45.953 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.n7n 00:24:45.953 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.n7n 00:24:45.953 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:46.211 [2024-11-18 18:31:44.396282] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.211 [2024-11-18 18:31:44.412228] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:46.211 [2024-11-18 18:31:44.412556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.211 malloc0 00:24:46.211 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:46.211 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3008865 00:24:46.211 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:46.211 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3008865 /var/tmp/bdevperf.sock 00:24:46.211 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3008865 ']' 00:24:46.211 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:46.211 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:46.211 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:46.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:46.211 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:46.211 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:46.469 [2024-11-18 18:31:44.625878] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:24:46.469 [2024-11-18 18:31:44.626028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3008865 ] 00:24:46.469 [2024-11-18 18:31:44.767478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.727 [2024-11-18 18:31:44.909098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.293 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.293 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:47.293 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.n7n 00:24:47.858 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:48.116 [2024-11-18 18:31:46.196687] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:48.116 TLSTESTn1 00:24:48.116 18:31:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:48.116 Running I/O for 10 seconds... 
00:24:50.421 2335.00 IOPS, 9.12 MiB/s [2024-11-18T17:31:49.690Z] 2402.50 IOPS, 9.38 MiB/s [2024-11-18T17:31:50.624Z] 2425.00 IOPS, 9.47 MiB/s [2024-11-18T17:31:51.558Z] 2423.00 IOPS, 9.46 MiB/s [2024-11-18T17:31:52.491Z] 2419.20 IOPS, 9.45 MiB/s [2024-11-18T17:31:53.865Z] 2423.83 IOPS, 9.47 MiB/s [2024-11-18T17:31:54.798Z] 2421.86 IOPS, 9.46 MiB/s [2024-11-18T17:31:55.732Z] 2426.25 IOPS, 9.48 MiB/s [2024-11-18T17:31:56.666Z] 2425.56 IOPS, 9.47 MiB/s [2024-11-18T17:31:56.666Z] 2429.90 IOPS, 9.49 MiB/s 00:24:58.329 Latency(us) 00:24:58.329 [2024-11-18T17:31:56.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.329 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:58.329 Verification LBA range: start 0x0 length 0x2000 00:24:58.329 TLSTESTn1 : 10.04 2433.73 9.51 0.00 0.00 52471.35 9223.59 40777.96 00:24:58.329 [2024-11-18T17:31:56.666Z] =================================================================================================================== 00:24:58.329 [2024-11-18T17:31:56.666Z] Total : 2433.73 9.51 0.00 0.00 52471.35 9223.59 40777.96 00:24:58.329 { 00:24:58.329 "results": [ 00:24:58.329 { 00:24:58.329 "job": "TLSTESTn1", 00:24:58.329 "core_mask": "0x4", 00:24:58.329 "workload": "verify", 00:24:58.329 "status": "finished", 00:24:58.329 "verify_range": { 00:24:58.329 "start": 0, 00:24:58.329 "length": 8192 00:24:58.329 }, 00:24:58.329 "queue_depth": 128, 00:24:58.329 "io_size": 4096, 00:24:58.329 "runtime": 10.036038, 00:24:58.329 "iops": 2433.7293262540456, 00:24:58.329 "mibps": 9.506755180679866, 00:24:58.329 "io_failed": 0, 00:24:58.329 "io_timeout": 0, 00:24:58.329 "avg_latency_us": 52471.351817491195, 00:24:58.329 "min_latency_us": 9223.585185185186, 00:24:58.329 "max_latency_us": 40777.955555555556 00:24:58.329 } 00:24:58.329 ], 00:24:58.329 "core_count": 1 00:24:58.329 } 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:58.329 18:31:56 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:58.329 nvmf_trace.0 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3008865 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3008865 ']' 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3008865 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3008865 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3008865' 00:24:58.329 killing process with pid 3008865 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3008865 00:24:58.329 Received shutdown signal, test time was about 10.000000 seconds 00:24:58.329 00:24:58.329 Latency(us) 00:24:58.329 [2024-11-18T17:31:56.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.329 [2024-11-18T17:31:56.666Z] =================================================================================================================== 00:24:58.329 [2024-11-18T17:31:56.666Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:58.329 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3008865 00:24:59.262 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:59.262 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:59.262 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:59.262 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:59.262 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:59.262 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:59.262 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:59.262 rmmod nvme_tcp 00:24:59.262 rmmod nvme_fabrics 00:24:59.262 rmmod nvme_keyring 00:24:59.262 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:59.262 18:31:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:59.262 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:59.262 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3008594 ']' 00:24:59.262 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3008594 00:24:59.262 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3008594 ']' 00:24:59.262 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3008594 00:24:59.262 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:59.262 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:59.262 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3008594 00:24:59.262 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:59.263 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:59.263 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3008594' 00:24:59.263 killing process with pid 3008594 00:24:59.263 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3008594 00:24:59.263 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3008594 00:25:00.636 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:00.636 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:00.636 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:00.636 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # 
iptr 00:25:00.636 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:25:00.636 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:00.636 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:25:00.636 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:00.636 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:00.636 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.636 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.636 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.164 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:03.164 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.n7n 00:25:03.164 00:25:03.164 real 0m20.556s 00:25:03.164 user 0m28.623s 00:25:03.164 sys 0m5.187s 00:25:03.164 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:03.164 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:03.164 ************************************ 00:25:03.164 END TEST nvmf_fips 00:25:03.164 ************************************ 00:25:03.164 18:32:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:03.164 18:32:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:03.164 18:32:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:25:03.164 18:32:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:03.164 ************************************ 00:25:03.164 START TEST nvmf_control_msg_list 00:25:03.164 ************************************ 00:25:03.164 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:03.164 * Looking for test storage... 00:25:03.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:03.164 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:03.164 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:25:03.164 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:03.164 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:03.164 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.164 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.164 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.164 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.164 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.164 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.164 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.164 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:25:03.164 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.164 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:03.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.165 --rc genhtml_branch_coverage=1 00:25:03.165 --rc genhtml_function_coverage=1 00:25:03.165 --rc genhtml_legend=1 00:25:03.165 --rc geninfo_all_blocks=1 00:25:03.165 --rc geninfo_unexecuted_blocks=1 00:25:03.165 00:25:03.165 ' 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:03.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.165 --rc genhtml_branch_coverage=1 00:25:03.165 --rc genhtml_function_coverage=1 00:25:03.165 --rc genhtml_legend=1 00:25:03.165 --rc geninfo_all_blocks=1 00:25:03.165 --rc geninfo_unexecuted_blocks=1 00:25:03.165 00:25:03.165 ' 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:03.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.165 --rc genhtml_branch_coverage=1 00:25:03.165 --rc genhtml_function_coverage=1 00:25:03.165 --rc genhtml_legend=1 00:25:03.165 --rc geninfo_all_blocks=1 00:25:03.165 --rc geninfo_unexecuted_blocks=1 00:25:03.165 00:25:03.165 ' 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:03.165 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.165 --rc genhtml_branch_coverage=1 00:25:03.165 --rc genhtml_function_coverage=1 00:25:03.165 --rc genhtml_legend=1 00:25:03.165 --rc geninfo_all_blocks=1 00:25:03.165 --rc geninfo_unexecuted_blocks=1 00:25:03.165 00:25:03.165 ' 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.165 18:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.165 18:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:03.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.165 18:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:03.165 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:03.166 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:03.166 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.067 18:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:05.067 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:05.067 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:05.067 18:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:05.067 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.067 18:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:05.067 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.067 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.068 18:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:05.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:25:05.068 00:25:05.068 --- 10.0.0.2 ping statistics --- 00:25:05.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.068 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:05.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:25:05.068 00:25:05.068 --- 10.0.0.1 ping statistics --- 00:25:05.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.068 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:05.068 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:05.326 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:05.326 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:05.326 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:05.326 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:05.326 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3012398 00:25:05.326 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:05.326 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3012398 00:25:05.326 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3012398 ']' 00:25:05.326 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.326 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.326 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:05.326 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.326 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:05.326 [2024-11-18 18:32:03.511318] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:25:05.326 [2024-11-18 18:32:03.511477] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.326 [2024-11-18 18:32:03.658762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.584 [2024-11-18 18:32:03.793032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.584 [2024-11-18 18:32:03.793116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.584 [2024-11-18 18:32:03.793144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.584 [2024-11-18 18:32:03.793169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.584 [2024-11-18 18:32:03.793189] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:05.584 [2024-11-18 18:32:03.794846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:06.518 [2024-11-18 18:32:04.516876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:06.518 Malloc0 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:06.518 [2024-11-18 18:32:04.586595] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3012549 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3012550 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3012551 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:06.518 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3012549 00:25:06.518 [2024-11-18 18:32:04.706454] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:25:06.518 [2024-11-18 18:32:04.706920] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:06.518 [2024-11-18 18:32:04.715336] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:07.892 Initializing NVMe Controllers 00:25:07.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:07.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:07.892 Initialization complete. Launching workers. 00:25:07.892 ======================================================== 00:25:07.892 Latency(us) 00:25:07.892 Device Information : IOPS MiB/s Average min max 00:25:07.892 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4226.00 16.51 236.00 210.01 1257.07 00:25:07.892 ======================================================== 00:25:07.892 Total : 4226.00 16.51 236.00 210.01 1257.07 00:25:07.892 00:25:07.892 Initializing NVMe Controllers 00:25:07.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:07.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:07.892 Initialization complete. Launching workers. 
00:25:07.892 ======================================================== 00:25:07.892 Latency(us) 00:25:07.892 Device Information : IOPS MiB/s Average min max 00:25:07.892 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 121.00 0.47 8464.81 322.29 42170.82 00:25:07.892 ======================================================== 00:25:07.892 Total : 121.00 0.47 8464.81 322.29 42170.82 00:25:07.892 00:25:07.892 Initializing NVMe Controllers 00:25:07.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:07.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:07.892 Initialization complete. Launching workers. 00:25:07.892 ======================================================== 00:25:07.892 Latency(us) 00:25:07.892 Device Information : IOPS MiB/s Average min max 00:25:07.892 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40962.02 40456.20 41957.18 00:25:07.892 ======================================================== 00:25:07.892 Total : 25.00 0.10 40962.02 40456.20 41957.18 00:25:07.892 00:25:07.892 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3012550 00:25:07.892 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3012551 00:25:07.892 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:07.892 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:07.892 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:07.892 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:07.892 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:07.892 18:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:07.892 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:07.892 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:07.892 rmmod nvme_tcp 00:25:07.892 rmmod nvme_fabrics 00:25:07.892 rmmod nvme_keyring 00:25:07.892 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:07.892 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:07.892 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:07.892 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 3012398 ']' 00:25:07.892 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3012398 00:25:07.892 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3012398 ']' 00:25:07.892 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3012398 00:25:07.892 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:25:07.892 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:07.892 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3012398 00:25:07.892 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:07.892 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:07.892 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3012398' 00:25:07.892 killing process with pid 3012398 00:25:07.892 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3012398 00:25:07.892 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3012398 00:25:09.272 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:09.272 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:09.272 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:09.272 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:09.272 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:09.272 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:09.272 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:09.272 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:09.272 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:09.272 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.272 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.272 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.232 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:11.232 00:25:11.232 real 0m8.365s 00:25:11.232 user 0m7.780s 
00:25:11.232 sys 0m2.906s 00:25:11.232 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:11.232 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:11.232 ************************************ 00:25:11.232 END TEST nvmf_control_msg_list 00:25:11.232 ************************************ 00:25:11.232 18:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:11.232 18:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:11.232 18:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:11.232 18:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:11.232 ************************************ 00:25:11.232 START TEST nvmf_wait_for_buf 00:25:11.232 ************************************ 00:25:11.232 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:11.232 * Looking for test storage... 
00:25:11.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:11.232 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:11.232 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:11.232 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:11.232 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:11.232 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:25:11.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.233 --rc genhtml_branch_coverage=1 00:25:11.233 --rc genhtml_function_coverage=1 00:25:11.233 --rc genhtml_legend=1 00:25:11.233 --rc geninfo_all_blocks=1 00:25:11.233 --rc geninfo_unexecuted_blocks=1 00:25:11.233 00:25:11.233 ' 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:11.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.233 --rc genhtml_branch_coverage=1 00:25:11.233 --rc genhtml_function_coverage=1 00:25:11.233 --rc genhtml_legend=1 00:25:11.233 --rc geninfo_all_blocks=1 00:25:11.233 --rc geninfo_unexecuted_blocks=1 00:25:11.233 00:25:11.233 ' 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:11.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.233 --rc genhtml_branch_coverage=1 00:25:11.233 --rc genhtml_function_coverage=1 00:25:11.233 --rc genhtml_legend=1 00:25:11.233 --rc geninfo_all_blocks=1 00:25:11.233 --rc geninfo_unexecuted_blocks=1 00:25:11.233 00:25:11.233 ' 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:11.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.233 --rc genhtml_branch_coverage=1 00:25:11.233 --rc genhtml_function_coverage=1 00:25:11.233 --rc genhtml_legend=1 00:25:11.233 --rc geninfo_all_blocks=1 00:25:11.233 --rc geninfo_unexecuted_blocks=1 00:25:11.233 00:25:11.233 ' 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:11.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:11.233 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:11.234 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.234 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:11.234 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:11.234 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:11.234 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.493 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.493 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.493 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:11.493 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:11.493 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:11.493 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.399 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:13.400 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:13.400 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:13.400 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.400 18:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:13.400 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:13.400 18:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.400 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.400 18:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:13.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:25:13.659 00:25:13.659 --- 10.0.0.2 ping statistics --- 00:25:13.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.659 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:25:13.659 00:25:13.659 --- 10.0.0.1 ping statistics --- 00:25:13.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.659 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3014774 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3014774 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3014774 ']' 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.659 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.659 [2024-11-18 18:32:11.884383] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:25:13.659 [2024-11-18 18:32:11.884533] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.917 [2024-11-18 18:32:12.053069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.917 [2024-11-18 18:32:12.194846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:13.917 [2024-11-18 18:32:12.194934] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:13.917 [2024-11-18 18:32:12.194959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.917 [2024-11-18 18:32:12.194984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.917 [2024-11-18 18:32:12.195003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:13.917 [2024-11-18 18:32:12.196638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.852 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.852 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:25:14.852 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:14.852 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:14.852 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.852 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.852 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:14.852 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:14.852 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:14.852 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.852 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.852 
18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.852 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:14.852 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.852 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.852 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.852 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:14.852 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.852 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.852 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.852 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:14.852 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.852 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:15.110 Malloc0 00:25:15.110 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.110 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:15.110 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.110 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:25:15.110 [2024-11-18 18:32:13.193282] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.110 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.110 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:15.110 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.110 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:15.110 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.110 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:15.110 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.110 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:15.110 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.110 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:15.110 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.110 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:15.110 [2024-11-18 18:32:13.217581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.110 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:15.110 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:15.110 [2024-11-18 18:32:13.383823] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:17.009 Initializing NVMe Controllers 00:25:17.009 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:17.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:17.009 Initialization complete. Launching workers. 00:25:17.009 ======================================================== 00:25:17.009 Latency(us) 00:25:17.009 Device Information : IOPS MiB/s Average min max 00:25:17.009 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 103.00 12.88 40527.85 23991.88 111733.41 00:25:17.009 ======================================================== 00:25:17.009 Total : 103.00 12.88 40527.85 23991.88 111733.41 00:25:17.009 00:25:17.009 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:17.009 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:17.009 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.009 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.009 18:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1622 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1622 -eq 0 ]] 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:17.009 rmmod nvme_tcp 00:25:17.009 rmmod nvme_fabrics 00:25:17.009 rmmod nvme_keyring 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3014774 ']' 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3014774 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3014774 ']' 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3014774 
00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3014774 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3014774' 00:25:17.009 killing process with pid 3014774 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3014774 00:25:17.009 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3014774 00:25:17.943 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:17.943 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:17.943 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:17.943 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:17.943 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:17.943 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:17.943 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:17.943 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:17.943 18:32:16 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:17.944 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.944 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:17.944 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.477 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:20.477 00:25:20.477 real 0m8.873s 00:25:20.477 user 0m5.420s 00:25:20.477 sys 0m2.234s 00:25:20.477 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:20.477 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:20.477 ************************************ 00:25:20.477 END TEST nvmf_wait_for_buf 00:25:20.477 ************************************ 00:25:20.477 18:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:20.477 18:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:20.477 18:32:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:20.477 18:32:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:20.477 18:32:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:20.477 ************************************ 00:25:20.478 START TEST nvmf_fuzz 00:25:20.478 ************************************ 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:25:20.478 * Looking for test storage... 00:25:20.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:20.478 18:32:18 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:20.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.478 --rc genhtml_branch_coverage=1 00:25:20.478 --rc genhtml_function_coverage=1 
00:25:20.478 --rc genhtml_legend=1 00:25:20.478 --rc geninfo_all_blocks=1 00:25:20.478 --rc geninfo_unexecuted_blocks=1 00:25:20.478 00:25:20.478 ' 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:20.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.478 --rc genhtml_branch_coverage=1 00:25:20.478 --rc genhtml_function_coverage=1 00:25:20.478 --rc genhtml_legend=1 00:25:20.478 --rc geninfo_all_blocks=1 00:25:20.478 --rc geninfo_unexecuted_blocks=1 00:25:20.478 00:25:20.478 ' 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:20.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.478 --rc genhtml_branch_coverage=1 00:25:20.478 --rc genhtml_function_coverage=1 00:25:20.478 --rc genhtml_legend=1 00:25:20.478 --rc geninfo_all_blocks=1 00:25:20.478 --rc geninfo_unexecuted_blocks=1 00:25:20.478 00:25:20.478 ' 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:20.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.478 --rc genhtml_branch_coverage=1 00:25:20.478 --rc genhtml_function_coverage=1 00:25:20.478 --rc genhtml_legend=1 00:25:20.478 --rc geninfo_all_blocks=1 00:25:20.478 --rc geninfo_unexecuted_blocks=1 00:25:20.478 00:25:20.478 ' 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.478 
18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.478 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:20.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:20.479 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.383 18:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:25:22.383 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:22.383 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:22.383 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:22.383 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:22.383 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:22.384 18:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:22.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:22.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:25:22.384 00:25:22.384 --- 10.0.0.2 ping statistics --- 00:25:22.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.384 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:22.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:22.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:25:22.384 00:25:22.384 --- 10.0.0.1 ping statistics --- 00:25:22.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.384 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3017258 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3017258 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' 
-z 3017258 ']' 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:22.384 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:23.314 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:23.315 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:25:23.315 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:23.315 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.315 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:23.315 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.315 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:23.315 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.315 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:23.571 Malloc0 00:25:23.571 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.571 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:23.571 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.571 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:23.571 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.571 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:23.571 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.571 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:23.571 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.572 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:23.572 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.572 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:23.572 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.572 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:23.572 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:55.635 Fuzzing completed. 
Shutting down the fuzz application 00:25:55.635 00:25:55.635 Dumping successful admin opcodes: 00:25:55.635 8, 9, 10, 24, 00:25:55.635 Dumping successful io opcodes: 00:25:55.635 0, 9, 00:25:55.635 NS: 0x2000008efec0 I/O qp, Total commands completed: 322787, total successful commands: 1904, random_seed: 847424768 00:25:55.635 NS: 0x2000008efec0 admin qp, Total commands completed: 40656, total successful commands: 332, random_seed: 4150340288 00:25:55.635 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:56.200 Fuzzing completed. Shutting down the fuzz application 00:25:56.200 00:25:56.201 Dumping successful admin opcodes: 00:25:56.201 24, 00:25:56.201 Dumping successful io opcodes: 00:25:56.201 00:25:56.201 NS: 0x2000008efec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2859801572 00:25:56.201 NS: 0x2000008efec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2860017592 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:56.201 18:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:56.201 rmmod nvme_tcp 00:25:56.201 rmmod nvme_fabrics 00:25:56.201 rmmod nvme_keyring 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 3017258 ']' 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 3017258 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3017258 ']' 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 3017258 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3017258 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3017258' 00:25:56.201 killing process with pid 3017258 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 3017258 00:25:56.201 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 3017258 00:25:57.576 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:57.576 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:57.576 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:57.576 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:57.576 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:57.576 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:57.576 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:57.576 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:57.576 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:57.576 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.576 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.576 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.106 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:00.106 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:00.106 00:26:00.106 real 0m39.579s 00:26:00.106 user 0m57.312s 00:26:00.106 sys 0m13.068s 00:26:00.106 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:00.106 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:00.106 ************************************ 00:26:00.106 END TEST nvmf_fuzz 00:26:00.106 ************************************ 00:26:00.106 18:32:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:00.106 18:32:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:00.106 18:32:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:00.106 18:32:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:00.106 ************************************ 00:26:00.106 START TEST nvmf_multiconnection 00:26:00.106 ************************************ 00:26:00.106 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:00.106 * Looking for test storage... 
00:26:00.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:00.106 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:00.106 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:26:00.106 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:00.106 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:00.106 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:00.106 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:00.106 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:00.106 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:26:00.107 18:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:00.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.107 --rc genhtml_branch_coverage=1 00:26:00.107 --rc genhtml_function_coverage=1 00:26:00.107 --rc genhtml_legend=1 00:26:00.107 --rc geninfo_all_blocks=1 00:26:00.107 --rc geninfo_unexecuted_blocks=1 00:26:00.107 00:26:00.107 ' 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:00.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.107 --rc genhtml_branch_coverage=1 00:26:00.107 --rc genhtml_function_coverage=1 00:26:00.107 --rc genhtml_legend=1 00:26:00.107 --rc geninfo_all_blocks=1 00:26:00.107 --rc geninfo_unexecuted_blocks=1 00:26:00.107 00:26:00.107 ' 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:00.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.107 --rc genhtml_branch_coverage=1 00:26:00.107 --rc genhtml_function_coverage=1 00:26:00.107 --rc genhtml_legend=1 00:26:00.107 --rc geninfo_all_blocks=1 00:26:00.107 --rc geninfo_unexecuted_blocks=1 00:26:00.107 00:26:00.107 ' 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:00.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.107 --rc genhtml_branch_coverage=1 00:26:00.107 --rc genhtml_function_coverage=1 00:26:00.107 --rc genhtml_legend=1 00:26:00.107 --rc geninfo_all_blocks=1 00:26:00.107 --rc geninfo_unexecuted_blocks=1 00:26:00.107 00:26:00.107 ' 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:00.107 18:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:00.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:00.107 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:00.108 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:00.108 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.108 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:00.108 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.108 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:00.108 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:00.108 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:26:00.108 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:02.014 18:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:02.014 18:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:02.014 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:02.014 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.014 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:02.015 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:02.015 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:02.015 18:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:02.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:02.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:26:02.015 00:26:02.015 --- 10.0.0.2 ping statistics --- 00:26:02.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.015 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:02.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:02.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:26:02.015 00:26:02.015 --- 10.0.0.1 ping statistics --- 00:26:02.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.015 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
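Annotation: the `nvmf_tcp_init` sequence traced above (flush addresses, create a network namespace, move the target NIC into it, assign the 10.0.0.x/24 pair, bring links up, open TCP port 4420, then ping both ways) can be condensed into the sketch below. Interface names (`cvl_0_0`/`cvl_0_1`), IPs, and the namespace name are taken from this log; this is a simplified dry-run reconstruction, not the actual `nvmf/common.sh`, and the real commands need root plus the physical NICs.

```shell
# Dry-run sketch of the nvmf_tcp_init steps seen in the trace above.
# run() prints each command instead of executing it; replace with the
# real commands (as root, with the NICs present) to perform the setup.
nvmf_tcp_init_dryrun() {
    local TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    run() { printf '%s\n' "$*"; }

    run ip -4 addr flush "$TARGET_IF"
    run ip -4 addr flush "$INITIATOR_IF"
    run ip netns add "$NS"
    run ip link set "$TARGET_IF" netns "$NS"           # target NIC lives in the namespace
    run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"    # initiator side stays in the host
    run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    run ip link set "$INITIATOR_IF" up
    run ip netns exec "$NS" ip link set "$TARGET_IF" up
    run ip netns exec "$NS" ip link set lo up
    run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                             # verify cross-namespace reachability
}
setup_cmds=$(nvmf_tcp_init_dryrun)
echo "$setup_cmds"
```

The effect is that `nvmf_tgt` later runs inside `cvl_0_0_ns_spdk` (via `ip netns exec`, as the `NVMF_TARGET_NS_CMD` line shows) and listens on 10.0.0.2, while the initiator connects from 10.0.0.1 on the host side.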
00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:02.015 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:02.273 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:02.273 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:02.273 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:02.273 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.273 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=3023281 00:26:02.273 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:02.273 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 3023281 00:26:02.273 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 3023281 ']' 00:26:02.273 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.273 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:02.274 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:02.274 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:02.274 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.274 [2024-11-18 18:33:00.467678] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:26:02.274 [2024-11-18 18:33:00.467822] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.532 [2024-11-18 18:33:00.622416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:02.532 [2024-11-18 18:33:00.750244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:02.532 [2024-11-18 18:33:00.750310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:02.532 [2024-11-18 18:33:00.750331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:02.532 [2024-11-18 18:33:00.750351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:02.532 [2024-11-18 18:33:00.750367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:02.532 [2024-11-18 18:33:00.752836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.532 [2024-11-18 18:33:00.752872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:02.532 [2024-11-18 18:33:00.752912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:02.532 [2024-11-18 18:33:00.752895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.097 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:03.097 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:26:03.097 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:03.097 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:03.097 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.356 [2024-11-18 18:33:01.451047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:03.356 18:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.356 Malloc1 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.356 [2024-11-18 18:33:01.574954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.356 Malloc2 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.356 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.357 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:03.357 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.357 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.616 Malloc3 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.616 Malloc4 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.616 
18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.616 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.874 Malloc5 00:26:03.874 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.874 18:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:03.874 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.874 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.874 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.874 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:03.874 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.874 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.874 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.874 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:03.874 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.874 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.874 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.874 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.874 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:03.874 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:03.874 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.874 Malloc6 00:26:03.874 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.874 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:03.874 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.874 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.874 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.874 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:03.874 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.874 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.874 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.874 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:03.874 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.875 Malloc7 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.875 18:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.875 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.133 Malloc8 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.133 18:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.133 Malloc9 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.133 Malloc10 00:26:04.133 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.392 Malloc11 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:04.392 
18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.392 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
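The trace above repeats the same four RPCs for each of the 11 subsystems (Malloc bdev, subsystem, namespace, TCP listener). A minimal sketch of that loop, reconstructed from the trace — the rpc.py path and the echo-only wrapper are assumptions for illustration; the real multiconnection.sh executes the RPCs against a running nvmf target:

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem setup loop seen in target/multiconnection.sh.
# rpc_py is a hypothetical path; this sketch only prints the RPC commands
# rather than executing them, so it can run without an SPDK target.
rpc_py="scripts/rpc.py"
NVMF_SUBSYS=11

setup_subsystem() {
    local i=$1
    # 64 MiB malloc bdev with 512-byte blocks, backing namespace 1 of cnode$i
    echo "$rpc_py bdev_malloc_create 64 512 -b Malloc$i"
    # subsystem with serial number SPDK$i, any host allowed (-a)
    echo "$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
    echo "$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
    # expose the subsystem on the TCP transport at 10.0.0.2:4420
    echo "$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
}

for i in $(seq 1 $NVMF_SUBSYS); do
    setup_subsystem "$i"
done
```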
00:26:04.958 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:04.958 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:04.958 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:04.958 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:04.958 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:07.593 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:07.593 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:07.593 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:26:07.593 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:07.593 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:07.593 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:07.593 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.593 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:07.851 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:07.851 18:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:07.851 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:07.851 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:07.851 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:09.749 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:09.749 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:09.749 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:26:09.749 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:09.749 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:09.749 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:09.749 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.749 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:10.682 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:10.682 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:10.683 18:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:10.683 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:10.683 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:12.581 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:12.581 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:12.581 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:26:12.581 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:12.581 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:12.581 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:12.581 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.581 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:13.146 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:13.147 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:13.147 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:13.147 
18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:13.147 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:15.674 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:15.674 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:15.675 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:26:15.675 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:15.675 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:15.675 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:15.675 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.675 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:15.933 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:15.933 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:15.933 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:15.933 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:15.933 18:33:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:18.462 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:18.462 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:18.462 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:26:18.462 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:18.462 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:18.462 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:18.462 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:18.463 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:18.721 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:18.721 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:18.721 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:18.721 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:18.721 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:21.250 18:33:18 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:21.250 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:21.250 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:26:21.250 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:21.250 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:21.250 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:21.250 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.250 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:21.508 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:21.508 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:21.508 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:21.508 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:21.508 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:23.407 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:23.407 18:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:23.407 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:26:23.665 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:23.665 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:23.665 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:23.665 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.665 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:24.233 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:24.233 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:24.233 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:24.233 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:24.233 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:26.760 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:26.760 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:26.760 18:33:24 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:26:26.760 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:26.760 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:26.760 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:26.760 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:26.760 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:27.017 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:27.017 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:27.017 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:27.017 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:27.017 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:29.544 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:29.544 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:29.544 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:26:29.544 18:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:29.544 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:29.544 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:29.544 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.544 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:30.109 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:30.109 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:30.109 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:30.109 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:30.109 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:32.010 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:32.010 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:32.010 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:26:32.010 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:32.010 18:33:30 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:32.010 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:32.010 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:32.010 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:32.944 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:32.944 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:32.944 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:32.944 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:32.944 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:34.842 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:34.842 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:34.842 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:34.842 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:34.842 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:34.842 
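Each `nvme connect` above is followed by a `waitforserial` poll from common/autotest_common.sh, which retries until a block device with the expected serial appears. A sketch of that helper, assuming the retry limit of 16 and the 2-second sleep visible in the trace; `LSBLK_CMD` is a hypothetical hook added here so the sketch can run without real NVMe devices (the real helper calls lsblk directly):

```shell
# Poll `lsblk -l -o NAME,SERIAL` until exactly one device with the expected
# serial (e.g. SPDK5) is visible, mirroring the waitforserial loop in the
# trace above. LSBLK_CMD is an assumption for testability, not part of the
# real helper.
waitforserial() {
    local serial=$1
    local i=0 nvme_device_counter=1 nvme_devices=0
    while (( i++ <= 15 )); do
        nvme_devices=$(${LSBLK_CMD:-lsblk -l -o NAME,SERIAL} | grep -c "$serial")
        if (( nvme_devices == nvme_device_counter )); then
            return 0
        fi
        sleep 2   # give the newly connected controller time to surface
    done
    return 1
}
```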
18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:34.842 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:34.842 [global] 00:26:34.842 thread=1 00:26:34.842 invalidate=1 00:26:34.842 rw=read 00:26:34.842 time_based=1 00:26:34.842 runtime=10 00:26:34.842 ioengine=libaio 00:26:34.842 direct=1 00:26:34.842 bs=262144 00:26:34.842 iodepth=64 00:26:34.842 norandommap=1 00:26:34.842 numjobs=1 00:26:34.842 00:26:34.842 [job0] 00:26:34.842 filename=/dev/nvme0n1 00:26:34.842 [job1] 00:26:34.842 filename=/dev/nvme10n1 00:26:34.842 [job2] 00:26:34.842 filename=/dev/nvme1n1 00:26:34.842 [job3] 00:26:34.842 filename=/dev/nvme2n1 00:26:34.842 [job4] 00:26:34.842 filename=/dev/nvme3n1 00:26:34.842 [job5] 00:26:34.842 filename=/dev/nvme4n1 00:26:34.842 [job6] 00:26:34.842 filename=/dev/nvme5n1 00:26:34.842 [job7] 00:26:34.842 filename=/dev/nvme6n1 00:26:34.842 [job8] 00:26:34.842 filename=/dev/nvme7n1 00:26:34.842 [job9] 00:26:34.842 filename=/dev/nvme8n1 00:26:34.842 [job10] 00:26:34.842 filename=/dev/nvme9n1 00:26:34.842 Could not set queue depth (nvme0n1) 00:26:34.842 Could not set queue depth (nvme10n1) 00:26:34.842 Could not set queue depth (nvme1n1) 00:26:34.842 Could not set queue depth (nvme2n1) 00:26:34.842 Could not set queue depth (nvme3n1) 00:26:34.842 Could not set queue depth (nvme4n1) 00:26:34.842 Could not set queue depth (nvme5n1) 00:26:34.842 Could not set queue depth (nvme6n1) 00:26:34.842 Could not set queue depth (nvme7n1) 00:26:34.842 Could not set queue depth (nvme8n1) 00:26:34.842 Could not set queue depth (nvme9n1) 00:26:35.100 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.100 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:26:35.100 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.100 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.100 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.100 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.100 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.100 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.100 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.100 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.100 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.100 fio-3.35 00:26:35.100 Starting 11 threads 00:26:47.316 00:26:47.316 job0: (groupid=0, jobs=1): err= 0: pid=3028240: Mon Nov 18 18:33:43 2024 00:26:47.316 read: IOPS=459, BW=115MiB/s (121MB/s)(1171MiB/10180msec) 00:26:47.316 slat (usec): min=8, max=498503, avg=1770.25, stdev=14051.14 00:26:47.316 clat (usec): min=1127, max=941618, avg=137303.16, stdev=166090.54 00:26:47.316 lat (usec): min=1156, max=1132.6k, avg=139073.41, stdev=168429.41 00:26:47.316 clat percentiles (msec): 00:26:47.316 | 1.00th=[ 3], 5.00th=[ 14], 10.00th=[ 26], 20.00th=[ 46], 00:26:47.316 | 30.00th=[ 58], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 97], 00:26:47.316 | 70.00th=[ 115], 80.00th=[ 161], 90.00th=[ 317], 95.00th=[ 575], 00:26:47.316 | 99.00th=[ 802], 99.50th=[ 894], 99.90th=[ 927], 99.95th=[ 936], 00:26:47.316 | 99.99th=[ 944] 00:26:47.316 bw ( KiB/s): min= 
7680, max=282112, per=15.01%, avg=118200.65, stdev=73363.27, samples=20 00:26:47.316 iops : min= 30, max= 1102, avg=461.70, stdev=286.55, samples=20 00:26:47.316 lat (msec) : 2=0.66%, 4=2.03%, 10=1.82%, 20=3.74%, 50=16.81% 00:26:47.316 lat (msec) : 100=36.16%, 250=25.46%, 500=7.54%, 750=4.02%, 1000=1.77% 00:26:47.316 cpu : usr=0.25%, sys=1.00%, ctx=1141, majf=0, minf=4097 00:26:47.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:47.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:47.316 issued rwts: total=4682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.316 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:47.316 job1: (groupid=0, jobs=1): err= 0: pid=3028247: Mon Nov 18 18:33:43 2024 00:26:47.316 read: IOPS=137, BW=34.3MiB/s (36.0MB/s)(347MiB/10111msec) 00:26:47.316 slat (usec): min=12, max=524456, avg=7197.38, stdev=35552.99 00:26:47.316 clat (msec): min=46, max=1284, avg=458.35, stdev=307.91 00:26:47.316 lat (msec): min=46, max=1284, avg=465.54, stdev=312.94 00:26:47.316 clat percentiles (msec): 00:26:47.316 | 1.00th=[ 52], 5.00th=[ 63], 10.00th=[ 90], 20.00th=[ 146], 00:26:47.316 | 30.00th=[ 209], 40.00th=[ 305], 50.00th=[ 405], 60.00th=[ 558], 00:26:47.316 | 70.00th=[ 667], 80.00th=[ 760], 90.00th=[ 860], 95.00th=[ 961], 00:26:47.316 | 99.00th=[ 1217], 99.50th=[ 1267], 99.90th=[ 1267], 99.95th=[ 1284], 00:26:47.316 | 99.99th=[ 1284] 00:26:47.316 bw ( KiB/s): min= 3584, max=144384, per=4.31%, avg=33944.20, stdev=30645.92, samples=20 00:26:47.316 iops : min= 14, max= 564, avg=132.55, stdev=119.74, samples=20 00:26:47.316 lat (msec) : 50=0.29%, 100=13.53%, 250=20.81%, 500=20.73%, 750=22.68% 00:26:47.316 lat (msec) : 1000=17.13%, 2000=4.82% 00:26:47.316 cpu : usr=0.06%, sys=0.52%, ctx=134, majf=0, minf=4097 00:26:47.316 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.5% 
00:26:47.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:47.316 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:47.316 issued rwts: total=1389,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:47.316 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:47.316 job2: (groupid=0, jobs=1): err= 0: pid=3028248: Mon Nov 18 18:33:43 2024
00:26:47.316 read: IOPS=511, BW=128MiB/s (134MB/s)(1290MiB/10078msec)
00:26:47.316 slat (usec): min=8, max=652647, avg=1279.77, stdev=10816.27
00:26:47.316 clat (usec): min=1450, max=858998, avg=123664.10, stdev=117654.38
00:26:47.316 lat (usec): min=1463, max=1057.0k, avg=124943.87, stdev=118977.10
00:26:47.316 clat percentiles (msec):
00:26:47.316 | 1.00th=[ 8], 5.00th=[ 20], 10.00th=[ 33], 20.00th=[ 40],
00:26:47.316 | 30.00th=[ 54], 40.00th=[ 72], 50.00th=[ 106], 60.00th=[ 125],
00:26:47.316 | 70.00th=[ 138], 80.00th=[ 165], 90.00th=[ 222], 95.00th=[ 351],
00:26:47.316 | 99.00th=[ 676], 99.50th=[ 735], 99.90th=[ 860], 99.95th=[ 860],
00:26:47.316 | 99.99th=[ 860]
00:26:47.316 bw ( KiB/s): min=25088, max=305664, per=16.57%, avg=130417.50, stdev=64501.31, samples=20
00:26:47.316 iops : min= 98, max= 1194, avg=509.40, stdev=251.95, samples=20
00:26:47.316 lat (msec) : 2=0.08%, 4=0.19%, 10=1.28%, 20=3.51%, 50=23.73%
00:26:47.316 lat (msec) : 100=19.07%, 250=43.46%, 500=6.44%, 750=1.98%, 1000=0.27%
00:26:47.316 cpu : usr=0.21%, sys=1.14%, ctx=1226, majf=0, minf=3721
00:26:47.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:26:47.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:47.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:47.316 issued rwts: total=5159,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:47.316 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:47.316 job3: (groupid=0, jobs=1): err= 0: pid=3028249: Mon Nov 18 18:33:43 2024
00:26:47.316 read: IOPS=222, BW=55.6MiB/s (58.3MB/s)(566MiB/10181msec)
00:26:47.316 slat (usec): min=8, max=664284, avg=3543.79, stdev=25783.89
00:26:47.316 clat (msec): min=2, max=1021, avg=284.19, stdev=233.62
00:26:47.316 lat (msec): min=3, max=1399, avg=287.73, stdev=237.48
00:26:47.316 clat percentiles (msec):
00:26:47.316 | 1.00th=[ 16], 5.00th=[ 32], 10.00th=[ 44], 20.00th=[ 87],
00:26:47.316 | 30.00th=[ 100], 40.00th=[ 124], 50.00th=[ 218], 60.00th=[ 330],
00:26:47.316 | 70.00th=[ 372], 80.00th=[ 472], 90.00th=[ 642], 95.00th=[ 743],
00:26:47.316 | 99.00th=[ 911], 99.50th=[ 953], 99.90th=[ 1020], 99.95th=[ 1020],
00:26:47.316 | 99.99th=[ 1020]
00:26:47.316 bw ( KiB/s): min=15840, max=216064, per=7.15%, avg=56308.05, stdev=48742.89, samples=20
00:26:47.316 iops : min= 61, max= 844, avg=219.90, stdev=190.42, samples=20
00:26:47.316 lat (msec) : 4=0.09%, 10=0.53%, 20=1.19%, 50=9.41%, 100=19.18%
00:26:47.316 lat (msec) : 250=20.59%, 500=30.80%, 750=13.88%, 1000=3.89%, 2000=0.44%
00:26:47.316 cpu : usr=0.07%, sys=0.53%, ctx=261, majf=0, minf=4097
00:26:47.316 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2%
00:26:47.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:47.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:47.316 issued rwts: total=2263,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:47.316 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:47.316 job4: (groupid=0, jobs=1): err= 0: pid=3028250: Mon Nov 18 18:33:43 2024
00:26:47.316 read: IOPS=131, BW=32.9MiB/s (34.5MB/s)(333MiB/10116msec)
00:26:47.316 slat (usec): min=10, max=530597, avg=7263.21, stdev=37066.46
00:26:47.316 clat (msec): min=34, max=1174, avg=478.45, stdev=293.43
00:26:47.316 lat (msec): min=34, max=1174, avg=485.71, stdev=298.38
00:26:47.316 clat percentiles (msec):
00:26:47.316 | 1.00th=[ 64], 5.00th=[ 92], 10.00th=[ 136], 20.00th=[ 184],
00:26:47.316 | 30.00th=[ 268], 40.00th=[ 317], 50.00th=[ 414], 60.00th=[ 535],
00:26:47.316 | 70.00th=[ 642], 80.00th=[ 776], 90.00th=[ 911], 95.00th=[ 995],
00:26:47.316 | 99.00th=[ 1116], 99.50th=[ 1133], 99.90th=[ 1167], 99.95th=[ 1167],
00:26:47.316 | 99.99th=[ 1167]
00:26:47.317 bw ( KiB/s): min=10752, max=105472, per=4.12%, avg=32458.15, stdev=24830.51, samples=20
00:26:47.317 iops : min= 42, max= 412, avg=126.75, stdev=97.01, samples=20
00:26:47.317 lat (msec) : 50=0.38%, 100=6.83%, 250=20.65%, 500=26.88%, 750=23.12%
00:26:47.317 lat (msec) : 1000=17.42%, 2000=4.73%
00:26:47.317 cpu : usr=0.07%, sys=0.43%, ctx=140, majf=0, minf=4098
00:26:47.317 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3%
00:26:47.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:47.317 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:47.317 issued rwts: total=1332,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:47.317 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:47.317 job5: (groupid=0, jobs=1): err= 0: pid=3028251: Mon Nov 18 18:33:43 2024
00:26:47.317 read: IOPS=528, BW=132MiB/s (139MB/s)(1346MiB/10174msec)
00:26:47.317 slat (usec): min=12, max=147198, avg=1858.99, stdev=8551.19
00:26:47.317 clat (msec): min=26, max=684, avg=119.01, stdev=117.61
00:26:47.317 lat (msec): min=28, max=684, avg=120.87, stdev=119.43
00:26:47.317 clat percentiles (msec):
00:26:47.317 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 44],
00:26:47.317 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 58], 60.00th=[ 80],
00:26:47.317 | 70.00th=[ 109], 80.00th=[ 180], 90.00th=[ 321], 95.00th=[ 401],
00:26:47.317 | 99.00th=[ 489], 99.50th=[ 518], 99.90th=[ 634], 99.95th=[ 634],
00:26:47.317 | 99.99th=[ 684]
00:26:47.317 bw ( KiB/s): min=31232, max=338432, per=17.29%, avg=136128.60, stdev=110254.79, samples=20
00:26:47.317 iops : min= 122, max= 1322, avg=531.70, stdev=430.62, samples=20
00:26:47.317 lat (msec) : 50=39.52%, 100=27.28%, 250=17.95%, 500=14.53%, 750=0.72%
00:26:47.317 cpu : usr=0.21%, sys=1.81%, ctx=537, majf=0, minf=4097
00:26:47.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8%
00:26:47.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:47.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:47.317 issued rwts: total=5382,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:47.317 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:47.317 job6: (groupid=0, jobs=1): err= 0: pid=3028252: Mon Nov 18 18:33:43 2024
00:26:47.317 read: IOPS=144, BW=36.2MiB/s (38.0MB/s)(365MiB/10078msec)
00:26:47.317 slat (usec): min=10, max=368873, avg=6018.37, stdev=32413.89
00:26:47.317 clat (msec): min=22, max=1268, avg=435.46, stdev=339.89
00:26:47.317 lat (msec): min=22, max=1403, avg=441.48, stdev=345.55
00:26:47.317 clat percentiles (msec):
00:26:47.317 | 1.00th=[ 82], 5.00th=[ 122], 10.00th=[ 129], 20.00th=[ 138],
00:26:47.317 | 30.00th=[ 144], 40.00th=[ 161], 50.00th=[ 192], 60.00th=[ 481],
00:26:47.317 | 70.00th=[ 709], 80.00th=[ 793], 90.00th=[ 936], 95.00th=[ 1036],
00:26:47.317 | 99.00th=[ 1116], 99.50th=[ 1116], 99.90th=[ 1267], 99.95th=[ 1267],
00:26:47.317 | 99.99th=[ 1267]
00:26:47.317 bw ( KiB/s): min= 5632, max=118784, per=4.54%, avg=35737.20, stdev=33727.39, samples=20
00:26:47.317 iops : min= 22, max= 464, avg=139.55, stdev=131.78, samples=20
00:26:47.317 lat (msec) : 50=0.21%, 100=2.47%, 250=48.63%, 500=8.77%, 750=14.38%
00:26:47.317 lat (msec) : 1000=18.42%, 2000=7.12%
00:26:47.317 cpu : usr=0.04%, sys=0.65%, ctx=295, majf=0, minf=4097
00:26:47.317 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7%
00:26:47.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:47.317 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:47.317 issued rwts: total=1460,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:47.317 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:47.317 job7: (groupid=0, jobs=1): err= 0: pid=3028253: Mon Nov 18 18:33:43 2024
00:26:47.317 read: IOPS=176, BW=44.1MiB/s (46.2MB/s)(448MiB/10175msec)
00:26:47.317 slat (usec): min=8, max=399273, avg=4309.58, stdev=28545.35
00:26:47.317 clat (msec): min=7, max=923, avg=358.63, stdev=238.17
00:26:47.317 lat (msec): min=7, max=1231, avg=362.94, stdev=242.92
00:26:47.317 clat percentiles (msec):
00:26:47.317 | 1.00th=[ 47], 5.00th=[ 89], 10.00th=[ 109], 20.00th=[ 130],
00:26:47.317 | 30.00th=[ 161], 40.00th=[ 232], 50.00th=[ 309], 60.00th=[ 372],
00:26:47.317 | 70.00th=[ 498], 80.00th=[ 592], 90.00th=[ 693], 95.00th=[ 877],
00:26:47.317 | 99.00th=[ 911], 99.50th=[ 919], 99.90th=[ 919], 99.95th=[ 927],
00:26:47.317 | 99.99th=[ 927]
00:26:47.317 bw ( KiB/s): min= 9216, max=126464, per=5.62%, avg=44259.75, stdev=31996.45, samples=20
00:26:47.317 iops : min= 36, max= 494, avg=172.85, stdev=125.01, samples=20
00:26:47.317 lat (msec) : 10=0.06%, 50=1.34%, 100=4.68%, 250=35.58%, 500=28.39%
00:26:47.317 lat (msec) : 750=21.81%, 1000=8.14%
00:26:47.317 cpu : usr=0.04%, sys=0.45%, ctx=194, majf=0, minf=4098
00:26:47.317 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5%
00:26:47.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:47.317 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:47.317 issued rwts: total=1793,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:47.317 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:47.317 job8: (groupid=0, jobs=1): err= 0: pid=3028254: Mon Nov 18 18:33:43 2024
00:26:47.317 read: IOPS=123, BW=30.9MiB/s (32.5MB/s)(313MiB/10114msec)
00:26:47.317 slat (usec): min=12, max=501335, avg=7982.78, stdev=39913.38
00:26:47.317 clat (msec): min=71, max=1506, avg=508.62, stdev=355.09
00:26:47.317 lat (msec): min=72, max=1548, avg=516.60, stdev=360.58
00:26:47.317 clat percentiles (msec):
00:26:47.317 | 1.00th=[ 82], 5.00th=[ 91], 10.00th=[ 104], 20.00th=[ 169],
00:26:47.317 | 30.00th=[ 247], 40.00th=[ 309], 50.00th=[ 426], 60.00th=[ 567],
00:26:47.317 | 70.00th=[ 693], 80.00th=[ 844], 90.00th=[ 978], 95.00th=[ 1167],
00:26:47.317 | 99.00th=[ 1452], 99.50th=[ 1502], 99.90th=[ 1502], 99.95th=[ 1502],
00:26:47.317 | 99.99th=[ 1502]
00:26:47.317 bw ( KiB/s): min= 5632, max=135168, per=3.87%, avg=30439.75, stdev=28191.30, samples=20
00:26:47.317 iops : min= 22, max= 528, avg=118.85, stdev=110.14, samples=20
00:26:47.317 lat (msec) : 100=9.27%, 250=21.01%, 500=24.68%, 750=17.89%, 1000=17.25%
00:26:47.317 lat (msec) : 2000=9.90%
00:26:47.317 cpu : usr=0.06%, sys=0.47%, ctx=123, majf=0, minf=4097
00:26:47.317 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.6%, >=64=95.0%
00:26:47.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:47.317 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:47.317 issued rwts: total=1252,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:47.317 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:47.317 job9: (groupid=0, jobs=1): err= 0: pid=3028255: Mon Nov 18 18:33:43 2024
00:26:47.317 read: IOPS=368, BW=92.2MiB/s (96.7MB/s)(938MiB/10172msec)
00:26:47.317 slat (usec): min=8, max=179405, avg=1153.08, stdev=7510.44
00:26:47.317 clat (msec): min=2, max=1177, avg=172.26, stdev=207.38
00:26:47.317 lat (msec): min=2, max=1177, avg=173.41, stdev=208.04
00:26:47.317 clat percentiles (msec):
00:26:47.317 | 1.00th=[ 13], 5.00th=[ 23], 10.00th=[ 36], 20.00th=[ 41],
00:26:47.317 | 30.00th=[ 42], 40.00th=[ 45], 50.00th=[ 80], 60.00th=[ 140],
00:26:47.317 | 70.00th=[ 180], 80.00th=[ 284], 90.00th=[ 397], 95.00th=[ 659],
00:26:47.317 | 99.00th=[ 936], 99.50th=[ 1053], 99.90th=[ 1183], 99.95th=[ 1183],
00:26:47.317 | 99.99th=[ 1183]
00:26:47.317 bw ( KiB/s): min=12825, max=371200, per=11.99%, avg=94384.05, stdev=96935.37, samples=20
00:26:47.317 iops : min= 50, max= 1450, avg=368.65, stdev=378.68, samples=20
00:26:47.317 lat (msec) : 4=0.13%, 10=0.48%, 20=3.25%, 50=40.26%, 100=8.88%
00:26:47.317 lat (msec) : 250=25.94%, 500=12.98%, 750=4.13%, 1000=3.44%, 2000=0.51%
00:26:47.317 cpu : usr=0.09%, sys=0.88%, ctx=756, majf=0, minf=4097
00:26:47.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3%
00:26:47.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:47.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:47.317 issued rwts: total=3751,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:47.317 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:47.317 job10: (groupid=0, jobs=1): err= 0: pid=3028256: Mon Nov 18 18:33:43 2024
00:26:47.317 read: IOPS=279, BW=69.9MiB/s (73.3MB/s)(711MiB/10179msec)
00:26:47.317 slat (usec): min=8, max=481586, avg=1620.46, stdev=13063.24
00:26:47.317 clat (usec): min=670, max=1217.7k, avg=227128.42, stdev=199877.61
00:26:47.317 lat (usec): min=721, max=1217.8k, avg=228748.88, stdev=201083.47
00:26:47.317 clat percentiles (usec):
00:26:47.317 | 1.00th=[ 971], 5.00th=[ 9372], 10.00th=[ 43779],
00:26:47.317 | 20.00th=[ 95945], 30.00th=[ 117965], 40.00th=[ 145753],
00:26:47.317 | 50.00th=[ 168821], 60.00th=[ 206570], 70.00th=[ 263193],
00:26:47.317 | 80.00th=[ 316670], 90.00th=[ 400557], 95.00th=[ 725615],
00:26:47.317 | 99.00th=[1002439], 99.50th=[1044382], 99.90th=[1199571],
00:26:47.317 | 99.95th=[1216349], 99.99th=[1216349]
00:26:47.317 bw ( KiB/s): min= 9216, max=148480, per=9.04%, avg=71199.55, stdev=37989.19, samples=20
00:26:47.317 iops : min= 36, max= 580, avg=278.10, stdev=148.39, samples=20
00:26:47.317 lat (usec) : 750=0.04%, 1000=1.30%
00:26:47.317 lat (msec) : 2=3.23%, 4=0.04%, 10=0.42%, 20=0.46%, 50=5.83%
00:26:47.317 lat (msec) : 100=9.28%, 250=47.56%, 500=24.29%, 750=4.18%, 1000=2.32%
00:26:47.317 lat (msec) : 2000=1.05%
00:26:47.317 cpu : usr=0.09%, sys=0.88%, ctx=912, majf=0, minf=4097
00:26:47.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8%
00:26:47.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:47.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:47.317 issued rwts: total=2845,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:47.317 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:47.317 
00:26:47.317 Run status group 0 (all jobs):
00:26:47.317 READ: bw=769MiB/s (806MB/s), 30.9MiB/s-132MiB/s (32.5MB/s-139MB/s), io=7827MiB (8207MB), run=10078-10181msec
00:26:47.317 
00:26:47.317 Disk stats (read/write):
00:26:47.317 nvme0n1: ios=9218/0, merge=0/0, ticks=1240383/0, in_queue=1240383, util=97.39%
00:26:47.317 nvme10n1: ios=2642/0, merge=0/0, ticks=1233170/0, in_queue=1233170, util=97.60%
00:26:47.317 nvme1n1: ios=10136/0, merge=0/0, ticks=1239768/0, in_queue=1239768, util=97.86%
00:26:47.317 nvme2n1: ios=4371/0, merge=0/0, ticks=1225169/0, in_queue=1225169, util=97.98%
00:26:47.317 nvme3n1: ios=2507/0, merge=0/0, ticks=1237012/0, in_queue=1237012, util=98.05%
00:26:47.317 nvme4n1: ios=10636/0, merge=0/0, ticks=1219802/0, in_queue=1219802, util=98.35%
00:26:47.317 nvme5n1: ios=2736/0, merge=0/0, ticks=1240971/0, in_queue=1240971, util=98.51%
00:26:47.317 nvme6n1: ios=3458/0, merge=0/0, ticks=1225341/0, in_queue=1225341, util=98.60%
00:26:47.318 nvme7n1: ios=2376/0, merge=0/0, ticks=1227625/0, in_queue=1227625, util=98.96%
00:26:47.318 nvme8n1: ios=7328/0, merge=0/0, ticks=1230575/0, in_queue=1230575, util=99.12%
00:26:47.318 nvme9n1: ios=5443/0, merge=0/0, ticks=1236986/0, in_queue=1236986, util=99.23%
18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10
00:26:47.318 [global]
00:26:47.318 thread=1
00:26:47.318 invalidate=1
00:26:47.318 rw=randwrite
00:26:47.318 time_based=1
00:26:47.318 runtime=10
00:26:47.318 ioengine=libaio
00:26:47.318 direct=1
00:26:47.318 bs=262144
00:26:47.318 iodepth=64
00:26:47.318 norandommap=1
00:26:47.318 numjobs=1
00:26:47.318 
00:26:47.318 [job0]
00:26:47.318 filename=/dev/nvme0n1
00:26:47.318 [job1]
00:26:47.318 filename=/dev/nvme10n1
00:26:47.318 [job2]
00:26:47.318 filename=/dev/nvme1n1
00:26:47.318 [job3]
00:26:47.318 filename=/dev/nvme2n1
00:26:47.318 [job4]
00:26:47.318 filename=/dev/nvme3n1
00:26:47.318 [job5]
00:26:47.318 filename=/dev/nvme4n1
00:26:47.318 [job6]
00:26:47.318 filename=/dev/nvme5n1
00:26:47.318 [job7]
00:26:47.318 filename=/dev/nvme6n1
00:26:47.318 [job8]
00:26:47.318 filename=/dev/nvme7n1
00:26:47.318 [job9]
00:26:47.318 filename=/dev/nvme8n1
00:26:47.318 [job10]
00:26:47.318 filename=/dev/nvme9n1
00:26:47.318 Could not set queue depth (nvme0n1)
00:26:47.318 Could not set queue depth (nvme10n1)
00:26:47.318 Could not set queue depth (nvme1n1)
00:26:47.318 Could not set queue depth (nvme2n1)
00:26:47.318 Could not set queue depth (nvme3n1)
00:26:47.318 Could not set queue depth (nvme4n1)
00:26:47.318 Could not set queue depth (nvme5n1)
00:26:47.318 Could not set queue depth (nvme6n1)
00:26:47.318 Could not set queue depth (nvme7n1)
00:26:47.318 Could not set queue depth (nvme8n1)
00:26:47.318 Could not set queue depth (nvme9n1)
00:26:47.318 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:47.318 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:47.318 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:47.318 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:47.318 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:47.318 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:47.318 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:47.318 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:47.318 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:47.318 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:47.318 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:47.318 fio-3.35
00:26:47.318 Starting 11 threads
00:26:57.325 
00:26:57.325 job0: (groupid=0, jobs=1): err= 0: pid=3028977: Mon Nov 18 18:33:54 2024
00:26:57.325 write: IOPS=335, BW=84.0MiB/s (88.1MB/s)(854MiB/10162msec); 0 zone resets
00:26:57.325 slat (usec): min=15, max=82670, avg=2191.99, stdev=6126.48
00:26:57.325 clat (usec): min=1030, max=571596, avg=188187.00, stdev=133099.95
00:26:57.325 lat (usec): min=1054, max=579511, avg=190378.99, stdev=134772.02
00:26:57.325 clat percentiles (msec):
00:26:57.325 | 1.00th=[ 6], 5.00th=[ 23], 10.00th=[ 39], 20.00th=[ 63],
00:26:57.325 | 30.00th=[ 68], 40.00th=[ 109], 50.00th=[ 171], 60.00th=[ 245],
00:26:57.325 | 70.00th=[ 284], 80.00th=[ 309], 90.00th=[ 355], 95.00th=[ 401],
00:26:57.325 | 99.00th=[ 542], 99.50th=[ 550], 99.90th=[ 567], 99.95th=[ 567],
00:26:57.325 | 99.99th=[ 575]
00:26:57.325 bw ( KiB/s): min=36352, max=257024, per=11.57%, avg=85792.05, stdev=57321.63, samples=20
00:26:57.325 iops : min= 142, max= 1004, avg=335.10, stdev=223.92, samples=20
00:26:57.325 lat (msec) : 2=0.29%, 4=0.53%, 10=0.94%, 20=1.76%, 50=12.42%
00:26:57.325 lat (msec) : 100=23.05%, 250=22.26%, 500=36.35%, 750=2.40%
00:26:57.325 cpu : usr=0.92%, sys=1.16%, ctx=1821, majf=0, minf=1
00:26:57.325 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2%
00:26:57.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:57.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:57.325 issued rwts: total=0,3414,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:57.325 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:57.325 job1: (groupid=0, jobs=1): err= 0: pid=3028990: Mon Nov 18 18:33:54 2024
00:26:57.325 write: IOPS=247, BW=61.9MiB/s (64.9MB/s)(631MiB/10202msec); 0 zone resets
00:26:57.325 slat (usec): min=19, max=90392, avg=2334.50, stdev=7832.81
00:26:57.325 clat (usec): min=1015, max=656671, avg=256104.42, stdev=184933.16
00:26:57.325 lat (usec): min=1046, max=656706, avg=258438.92, stdev=186711.91
00:26:57.325 clat percentiles (usec):
00:26:57.325 | 1.00th=[ 1958], 5.00th=[ 4178], 10.00th=[ 9634], 20.00th=[ 36963],
00:26:57.325 | 30.00th=[149947], 40.00th=[200279], 50.00th=[265290], 60.00th=[308282],
00:26:57.325 | 70.00th=[346031], 80.00th=[429917], 90.00th=[534774], 95.00th=[583009],
00:26:57.325 | 99.00th=[641729], 99.50th=[650118], 99.90th=[658506], 99.95th=[658506],
00:26:57.325 | 99.99th=[658506]
00:26:57.325 bw ( KiB/s): min=25600, max=201728, per=8.50%, avg=63001.60, stdev=39381.97, samples=20
00:26:57.325 iops : min= 100, max= 788, avg=246.10, stdev=153.84, samples=20
00:26:57.325 lat (msec) : 2=1.07%, 4=3.25%, 10=6.02%, 20=5.86%, 50=6.06%
00:26:57.325 lat (msec) : 100=5.50%, 250=20.40%, 500=40.48%, 750=11.37%
00:26:57.325 cpu : usr=0.71%, sys=0.83%, ctx=1684, majf=0, minf=1
00:26:57.325 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5%
00:26:57.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:57.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:57.325 issued rwts: total=0,2525,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:57.325 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:57.325 job2: (groupid=0, jobs=1): err= 0: pid=3028991: Mon Nov 18 18:33:54 2024
00:26:57.325 write: IOPS=212, BW=53.0MiB/s (55.6MB/s)(543MiB/10227msec); 0 zone resets
00:26:57.325 slat (usec): min=17, max=57776, avg=2895.22, stdev=8791.12
00:26:57.325 clat (usec): min=1003, max=690413, avg=298547.15, stdev=201455.72
00:26:57.325 lat (usec): min=1044, max=699068, avg=301442.38, stdev=203769.98
00:26:57.325 clat percentiles (usec):
00:26:57.325 | 1.00th=[ 1795], 5.00th=[ 4555], 10.00th=[ 8291], 20.00th=[ 19530],
00:26:57.325 | 30.00th=[162530], 40.00th=[258999], 50.00th=[341836], 60.00th=[400557],
00:26:57.325 | 70.00th=[442500], 80.00th=[467665], 90.00th=[549454], 95.00th=[608175],
00:26:57.325 | 99.00th=[658506], 99.50th=[666895], 99.90th=[683672], 99.95th=[692061],
00:26:57.325 | 99.99th=[692061]
00:26:57.325 bw ( KiB/s): min=26624, max=157696, per=7.27%, avg=53922.10, stdev=32012.90, samples=20
00:26:57.325 iops : min= 104, max= 616, avg=210.60, stdev=125.02, samples=20
00:26:57.325 lat (msec) : 2=1.20%, 4=2.12%, 10=9.72%, 20=7.00%, 50=1.52%
00:26:57.325 lat (msec) : 100=3.32%, 250=14.10%, 500=45.39%, 750=15.62%
00:26:57.325 cpu : usr=0.62%, sys=0.71%, ctx=1386, majf=0, minf=1
00:26:57.325 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1%
00:26:57.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:57.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:57.325 issued rwts: total=0,2170,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:57.325 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:57.325 job3: (groupid=0, jobs=1): err= 0: pid=3028992: Mon Nov 18 18:33:54 2024
00:26:57.325 write: IOPS=463, BW=116MiB/s (122MB/s)(1168MiB/10072msec); 0 zone resets
00:26:57.325 slat (usec): min=14, max=318036, avg=1378.40, stdev=6174.07
00:26:57.325 clat (usec): min=1120, max=941554, avg=136512.77, stdev=139650.36
00:26:57.325 lat (usec): min=1146, max=941594, avg=137891.17, stdev=140560.89
00:26:57.325 clat percentiles (msec):
00:26:57.325 | 1.00th=[ 3], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 56],
00:26:57.325 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 77],
00:26:57.325 | 70.00th=[ 165], 80.00th=[ 232], 90.00th=[ 309], 95.00th=[ 414],
00:26:57.325 | 99.00th=[ 659], 99.50th=[ 852], 99.90th=[ 911], 99.95th=[ 936],
00:26:57.325 | 99.99th=[ 944]
00:26:57.325 bw ( KiB/s): min=26112, max=290816, per=15.92%, avg=118020.35, stdev=91012.81, samples=20
00:26:57.325 iops : min= 102, max= 1136, avg=461.00, stdev=355.53, samples=20
00:26:57.325 lat (msec) : 2=0.43%, 4=1.13%, 10=1.39%, 20=0.04%, 50=1.03%
00:26:57.325 lat (msec) : 100=61.01%, 250=17.55%, 500=14.25%, 750=2.40%, 1000=0.77%
00:26:57.325 cpu : usr=1.53%, sys=1.54%, ctx=1899, majf=0, minf=1
00:26:57.325 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:26:57.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:57.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:57.325 issued rwts: total=0,4673,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:57.325 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:57.325 job4: (groupid=0, jobs=1): err= 0: pid=3028993: Mon Nov 18 18:33:54 2024
00:26:57.325 write: IOPS=217, BW=54.5MiB/s (57.1MB/s)(557MiB/10223msec); 0 zone resets
00:26:57.325 slat (usec): min=16, max=173412, avg=4445.10, stdev=9339.31
00:26:57.325 clat (msec): min=3, max=626, avg=289.17, stdev=109.63
00:26:57.325 lat (msec): min=3, max=626, avg=293.61, stdev=110.93
00:26:57.325 clat percentiles (msec):
00:26:57.325 | 1.00th=[ 126], 5.00th=[ 142], 10.00th=[ 148], 20.00th=[ 199],
00:26:57.325 | 30.00th=[ 230], 40.00th=[ 251], 50.00th=[ 268], 60.00th=[ 296],
00:26:57.325 | 70.00th=[ 330], 80.00th=[ 397], 90.00th=[ 447], 95.00th=[ 493],
00:26:57.325 | 99.00th=[ 558], 99.50th=[ 567], 99.90th=[ 600], 99.95th=[ 625],
00:26:57.325 | 99.99th=[ 625]
00:26:57.325 bw ( KiB/s): min=30720, max=105472, per=7.47%, avg=55379.75, stdev=18844.96, samples=20
00:26:57.325 iops : min= 120, max= 412, avg=216.30, stdev=73.59, samples=20
00:26:57.325 lat (msec) : 4=0.04%, 10=0.45%, 250=39.11%, 500=55.73%, 750=4.67%
00:26:57.325 cpu : usr=0.66%, sys=0.69%, ctx=583, majf=0, minf=1
00:26:57.325 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2%
00:26:57.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:57.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:57.325 issued rwts: total=0,2227,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:57.325 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:57.325 job5: (groupid=0, jobs=1): err= 0: pid=3028994: Mon Nov 18 18:33:54 2024
00:26:57.325 write: IOPS=203, BW=50.9MiB/s (53.3MB/s)(519MiB/10198msec); 0 zone resets
00:26:57.325 slat (usec): min=17, max=64066, avg=3681.22, stdev=9808.45
00:26:57.325 clat (msec): min=3, max=666, avg=310.68, stdev=188.34
00:26:57.325 lat (msec): min=5, max=666, avg=314.36, stdev=191.21
00:26:57.325 clat percentiles (msec):
00:26:57.325 | 1.00th=[ 11], 5.00th=[ 24], 10.00th=[ 46], 20.00th=[ 105],
00:26:57.325 | 30.00th=[ 161], 40.00th=[ 241], 50.00th=[ 351], 60.00th=[ 414],
00:26:57.325 | 70.00th=[ 439], 80.00th=[ 468], 90.00th=[ 558], 95.00th=[ 609],
00:26:57.325 | 99.00th=[ 651], 99.50th=[ 667], 99.90th=[ 667], 99.95th=[ 667],
00:26:57.325 | 99.99th=[ 667]
00:26:57.325 bw ( KiB/s): min=22528, max=118272, per=6.95%, avg=51518.40, stdev=27464.90, samples=20
00:26:57.325 iops : min= 88, max= 462, avg=201.20, stdev=107.18, samples=20
00:26:57.325 lat (msec) : 4=0.05%, 10=0.72%, 20=3.52%, 50=6.80%, 100=7.90%
00:26:57.325 lat (msec) : 250=21.93%, 500=42.55%, 750=16.53%
00:26:57.325 cpu : usr=0.62%, sys=0.71%, ctx=1149, majf=0, minf=1
00:26:57.325 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0%
00:26:57.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:57.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:57.326 issued rwts: total=0,2075,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:57.326 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:57.326 job6: (groupid=0, jobs=1): err= 0: pid=3028995: Mon Nov 18 18:33:54 2024
00:26:57.326 write: IOPS=269, BW=67.4MiB/s (70.7MB/s)(678MiB/10055msec); 0 zone resets
00:26:57.326 slat (usec): min=14, max=66190, avg=2661.56, stdev=8084.02
00:26:57.326 clat (usec): min=987, max=660276, avg=234716.96, stdev=187877.80
00:26:57.326 lat (usec): min=1019, max=660312, avg=237378.51, stdev=190278.38
00:26:57.326 clat percentiles (usec):
00:26:57.326 | 1.00th=[ 1795], 5.00th=[ 6063], 10.00th=[ 12125], 20.00th=[ 39060],
00:26:57.326 | 30.00th=[ 67634], 40.00th=[152044], 50.00th=[196084], 60.00th=[287310],
00:26:57.326 | 70.00th=[341836], 80.00th=[434111], 90.00th=[513803], 95.00th=[574620],
00:26:57.326 | 99.00th=[624952], 99.50th=[641729], 99.90th=[650118], 99.95th=[650118],
00:26:57.326 | 99.99th=[658506]
00:26:57.326 bw ( KiB/s): min=26624, max=273920, per=9.14%, avg=67767.30, stdev=57168.79, samples=20
00:26:57.326 iops : min= 104, max= 1070, avg=264.70, stdev=223.32, samples=20
00:26:57.326 lat (usec) : 1000=0.04%
00:26:57.326 lat (msec) : 2=1.22%, 4=1.66%, 10=6.13%, 20=3.65%, 50=11.22%
00:26:57.326 lat (msec) : 100=11.59%, 250=19.11%, 500=34.54%, 750=10.85%
00:26:57.326 cpu : usr=0.81%, sys=0.83%, ctx=1614, majf=0, minf=1
00:26:57.326 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7%
00:26:57.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:57.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:57.326 issued rwts: total=0,2710,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:57.326 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:57.326 job7: (groupid=0, jobs=1): err= 0: pid=3028996: Mon Nov 18 18:33:54 2024
00:26:57.326 write: IOPS=231, BW=57.9MiB/s (60.7MB/s)(593MiB/10229msec); 0 zone resets
00:26:57.326 slat (usec): min=22, max=44944, avg=3264.90, stdev=8908.38
00:26:57.326 clat (usec): min=1125, max=633279, avg=272768.42, stdev=194557.43
00:26:57.326 lat (usec): min=1193, max=639430, avg=276033.32, stdev=197179.02
00:26:57.326 clat percentiles (msec):
00:26:57.326 | 1.00th=[ 3], 5.00th=[ 15], 10.00th=[ 31], 20.00th=[ 64],
00:26:57.326 | 30.00th=[ 86], 40.00th=[ 192], 50.00th=[ 251], 60.00th=[ 376],
00:26:57.326 | 70.00th=[ 430], 80.00th=[ 456], 90.00th=[ 535], 95.00th=[ 584],
00:26:57.326 | 99.00th=[ 625], 99.50th=[ 634], 99.90th=[ 634], 99.95th=[ 634],
00:26:57.326 | 99.99th=[ 634]
00:26:57.326 bw ( KiB/s): min=24576, max=216576, per=7.96%, avg=59041.40, stdev=48943.45, samples=20
00:26:57.326 iops : min= 96, max= 846, avg=230.60, stdev=191.17, samples=20
00:26:57.326 lat (msec) : 2=0.93%, 4=1.48%, 10=2.32%, 20=1.10%, 50=10.72%
00:26:57.326 lat (msec) : 100=17.13%, 250=15.91%, 500=37.64%, 750=12.78%
00:26:57.326 cpu : usr=0.66%, sys=0.74%, ctx=1279, majf=0, minf=1
00:26:57.326 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3%
00:26:57.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:57.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:57.326 issued rwts: total=0,2370,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:57.326 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:57.326 job8: (groupid=0, jobs=1): err= 0: pid=3028997: Mon Nov 18 18:33:54 2024
00:26:57.326 write: IOPS=168, BW=42.2MiB/s (44.3MB/s)(432MiB/10225msec); 0 zone resets
00:26:57.326 slat (usec): min=18, max=142233, avg=4588.71, stdev=11618.34
00:26:57.326 clat (msec): min=2, max=754, avg=373.78, stdev=180.97
00:26:57.326 lat (msec): min=2, max=756, avg=378.37, stdev=183.31
00:26:57.326 clat percentiles (msec):
00:26:57.326 | 1.00th=[ 5], 5.00th=[ 39], 10.00th=[ 68], 20.00th=[ 224],
00:26:57.326 | 30.00th=[ 313], 40.00th=[ 359], 50.00th=[ 405], 60.00th=[ 435],
00:26:57.326 | 70.00th=[ 460], 80.00th=[ 502], 90.00th=[ 609], 95.00th=[ 676],
00:26:57.326 | 99.00th=[ 726], 99.50th=[ 735], 99.90th=[ 751], 99.95th=[ 751],
00:26:57.326 | 99.99th=[ 751]
00:26:57.326 bw ( KiB/s): min=24576, max=82432, per=5.75%, avg=42628.15, stdev=15028.13, samples=20
00:26:57.326 iops : min= 96, max= 322, avg=166.50, stdev=58.70, samples=20
00:26:57.326 lat (msec) : 4=0.41%, 10=2.49%, 20=0.58%, 50=5.27%, 100=3.59%
00:26:57.326 lat (msec) : 250=11.17%, 500=56.02%, 750=20.43%, 1000=0.06%
00:26:57.326 cpu : usr=0.38%, sys=0.63%, ctx=848, majf=0, minf=1
00:26:57.326 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.4%
00:26:57.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:57.326 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:57.326 issued rwts: total=0,1728,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:57.326 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:57.326 job9: (groupid=0, jobs=1): err= 0: pid=3028998: Mon Nov 18 18:33:54 2024
00:26:57.326 write: IOPS=349, BW=87.4MiB/s (91.6MB/s)(891MiB/10195msec); 0 zone resets
00:26:57.326 slat (usec): min=15, max=37539, avg=2261.22, stdev=5660.73
00:26:57.326 clat (msec): min=2, max=581, avg=180.70, stdev=123.02
00:26:57.326 lat (msec): min=2, max=588, avg=182.96, stdev=124.41
00:26:57.326 clat percentiles (msec):
00:26:57.326 | 1.00th=[ 7], 5.00th=[ 15], 10.00th=[ 54], 20.00th=[ 58],
00:26:57.326 | 30.00th=[ 72], 40.00th=[ 133], 50.00th=[ 167], 60.00th=[ 228],
00:26:57.326 | 70.00th=[ 249], 80.00th=[ 284], 90.00th=[ 338], 95.00th=[ 393],
00:26:57.326 | 99.00th=[ 535], 99.50th=[ 558], 99.90th=[ 575], 99.95th=[ 575],
00:26:57.326 | 99.99th=[ 584]
00:26:57.326 bw ( KiB/s): min=38912, max=264192, per=12.09%, avg=89625.60, stdev=51284.13, samples=20
00:26:57.326 iops : min= 152, max= 1032, avg=350.10, stdev=200.33, samples=20
00:26:57.326 lat (msec) : 4=0.17%, 10=2.55%, 20=4.60%, 50=1.66%, 100=27.58%
00:26:57.326 lat (msec) : 250=34.57%, 500=27.33%, 750=1.54%
00:26:57.326 cpu : usr=0.86%, sys=1.05%, ctx=1573, majf=0, minf=1
00:26:57.326 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2%
00:26:57.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:57.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:57.326 issued rwts: total=0,3564,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:57.326 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:57.326 job10: (groupid=0, jobs=1): err= 0: pid=3028999: Mon Nov 18 18:33:54 2024
00:26:57.326 write: IOPS=212, BW=53.2MiB/s (55.8MB/s)(541MiB/10175msec); 0 zone resets
00:26:57.326 slat (usec): min=22, max=48937, avg=3859.65, stdev=9257.67
00:26:57.326 clat (msec): min=5, max=648, avg=296.88, stdev=153.82
00:26:57.326 lat (msec): min=5, max=648, avg=300.74, stdev=156.23
00:26:57.326 clat percentiles (msec):
00:26:57.326 | 1.00th=[ 15], 5.00th=[ 59], 10.00th=[ 102], 20.00th=[ 159],
00:26:57.326 | 30.00th=[ 215], 40.00th=[ 247], 50.00th=[ 288], 60.00th=[ 321],
00:26:57.326 | 70.00th=[ 368], 80.00th=[ 426], 90.00th=[ 542], 95.00th=[ 600],
00:26:57.326 | 99.00th=[ 634], 99.50th=[ 642], 99.90th=[ 651], 99.95th=[ 651],
00:26:57.326 | 99.99th=[ 651]
00:26:57.326 bw ( KiB/s): min=26624, max=127230, per=7.26%, avg=53798.30, stdev=24393.15, samples=20
00:26:57.326 iops : min= 104, max= 496, avg=210.10, stdev=95.13, samples=20
00:26:57.326 lat (msec) : 10=0.32%, 20=1.76%, 50=2.36%, 100=5.36%, 250=31.28%
00:26:57.326 lat (msec) : 500=48.01%, 750=10.91%
00:26:57.326 cpu : usr=0.64%, sys=0.76%, ctx=913, majf=0, minf=1
00:26:57.326 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1%
00:26:57.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:57.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:57.326 issued rwts: total=0,2164,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:57.326 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:57.326 
00:26:57.326 Run status group 0 (all jobs): 00:26:57.326 WRITE: bw=724MiB/s (759MB/s), 42.2MiB/s-116MiB/s (44.3MB/s-122MB/s), io=7405MiB (7765MB), run=10055-10229msec 00:26:57.326 00:26:57.326 Disk stats (read/write): 00:26:57.326 nvme0n1: ios=41/6635, merge=0/0, ticks=610/1206281, in_queue=1206891, util=100.00% 00:26:57.326 nvme10n1: ios=45/5031, merge=0/0, ticks=56/1252339, in_queue=1252395, util=97.79% 00:26:57.326 nvme1n1: ios=0/4308, merge=0/0, ticks=0/1248771, in_queue=1248771, util=97.71% 00:26:57.326 nvme2n1: ios=36/9135, merge=0/0, ticks=243/1216305, in_queue=1216548, util=98.65% 00:26:57.326 nvme3n1: ios=34/4419, merge=0/0, ticks=1553/1230904, in_queue=1232457, util=100.00% 00:26:57.326 nvme4n1: ios=0/4136, merge=0/0, ticks=0/1244283, in_queue=1244283, util=98.24% 00:26:57.326 nvme5n1: ios=24/5177, merge=0/0, ticks=191/1224803, in_queue=1224994, util=100.00% 00:26:57.326 nvme6n1: ios=0/4706, merge=0/0, ticks=0/1241874, in_queue=1241874, util=98.49% 00:26:57.326 nvme7n1: ios=40/3426, merge=0/0, ticks=1718/1240148, in_queue=1241866, util=100.00% 00:26:57.326 nvme8n1: ios=0/7109, merge=0/0, ticks=0/1242170, in_queue=1242170, util=98.96% 00:26:57.326 nvme9n1: ios=24/4178, merge=0/0, ticks=1401/1195929, in_queue=1197330, util=100.00% 00:26:57.326 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:57.326 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:57.326 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:57.326 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:57.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:57.326 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 
00:26:57.326 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:57.326 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:57.326 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:57.326 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:57.326 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:57.326 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:57.326 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:57.326 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.326 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:57.326 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.326 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:57.327 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:57.327 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:57.327 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:57.327 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:57.327 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 
-- # lsblk -o NAME,SERIAL 00:26:57.327 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:57.327 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:57.327 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:57.327 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:57.327 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:57.327 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.327 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:57.327 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.327 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:57.327 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:57.585 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:57.585 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:57.585 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:57.585 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:57.585 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:57.843 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:57.843 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:57.843 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:57.843 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:57.843 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.843 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:57.843 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.843 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:57.843 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:58.101 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:58.101 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:58.101 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:58.101 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:58.101 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:58.101 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:58.101 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:58.101 18:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:58.101 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:58.101 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.101 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:58.101 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.101 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:58.101 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:58.359 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:58.359 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:58.359 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:58.359 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:58.359 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:58.359 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:58.359 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:58.359 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:58.359 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:58.359 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.359 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:58.359 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.359 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:58.359 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:58.618 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:58.618 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:58.618 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:58.618 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:58.618 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:58.618 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:58.618 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:58.618 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:58.618 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:58.618 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.618 18:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:58.618 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.618 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:58.618 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:58.876 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:58.876 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:58.876 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:58.876 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:58.876 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:58.876 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:58.876 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:58.876 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:58.876 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:58.876 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.876 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:58.876 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:58.876 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:58.876 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:59.134 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:59.134 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:59.134 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:59.134 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:59.134 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:59.134 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:59.134 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:59.404 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:59.404 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:59.404 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.404 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.404 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.404 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:59.404 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:59.404 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:59.404 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:59.404 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:59.404 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:59.404 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:59.404 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:59.404 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:59.404 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:59.404 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:59.404 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.404 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.404 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.404 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:59.404 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:59.665 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:59.665 18:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:59.665 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:59.665 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:59.665 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:59.665 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:59.665 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:59.665 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:59.665 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:59.665 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.665 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.665 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.665 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:59.665 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:59.923 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # local i=0 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@124 -- # set +e 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:59.923 rmmod nvme_tcp 00:26:59.923 rmmod nvme_fabrics 00:26:59.923 rmmod nvme_keyring 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 3023281 ']' 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 3023281 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 3023281 ']' 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 3023281 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3023281 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3023281' 00:26:59.923 killing 
process with pid 3023281 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 3023281 00:26:59.923 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 3023281 00:27:03.204 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:03.204 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:03.204 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:03.204 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:27:03.204 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:27:03.204 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:03.204 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:27:03.204 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:03.204 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:03.204 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.204 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:03.204 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.105 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:05.105 00:27:05.105 real 1m5.184s 00:27:05.105 user 3m48.034s 00:27:05.105 sys 0m15.019s 00:27:05.105 18:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:05.105 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:05.105 ************************************ 00:27:05.106 END TEST nvmf_multiconnection 00:27:05.106 ************************************ 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:05.106 ************************************ 00:27:05.106 START TEST nvmf_initiator_timeout 00:27:05.106 ************************************ 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:05.106 * Looking for test storage... 
00:27:05.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 
00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:05.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.106 --rc genhtml_branch_coverage=1 00:27:05.106 --rc genhtml_function_coverage=1 00:27:05.106 --rc genhtml_legend=1 00:27:05.106 --rc geninfo_all_blocks=1 00:27:05.106 --rc geninfo_unexecuted_blocks=1 00:27:05.106 00:27:05.106 ' 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:05.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.106 --rc genhtml_branch_coverage=1 00:27:05.106 --rc genhtml_function_coverage=1 00:27:05.106 --rc genhtml_legend=1 00:27:05.106 --rc geninfo_all_blocks=1 00:27:05.106 --rc geninfo_unexecuted_blocks=1 00:27:05.106 00:27:05.106 ' 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:05.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.106 --rc genhtml_branch_coverage=1 00:27:05.106 --rc genhtml_function_coverage=1 00:27:05.106 --rc genhtml_legend=1 00:27:05.106 --rc geninfo_all_blocks=1 00:27:05.106 --rc geninfo_unexecuted_blocks=1 00:27:05.106 00:27:05.106 ' 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:05.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.106 --rc genhtml_branch_coverage=1 00:27:05.106 --rc genhtml_function_coverage=1 00:27:05.106 --rc genhtml_legend=1 00:27:05.106 --rc geninfo_all_blocks=1 00:27:05.106 --rc geninfo_unexecuted_blocks=1 00:27:05.106 00:27:05.106 ' 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:05.106 
18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.106 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:05.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:27:05.107 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:07.640 18:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:07.640 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.640 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:07.641 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:07.641 18:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:07.641 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.641 18:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:07.641 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:07.641 18:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:07.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:07.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:27:07.641 00:27:07.641 --- 10.0.0.2 ping statistics --- 00:27:07.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.641 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:07.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:07.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:27:07.641 00:27:07.641 --- 10.0.0.1 ping statistics --- 00:27:07.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.641 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=3032585 
00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 3032585 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 3032585 ']' 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:07.641 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.641 [2024-11-18 18:34:05.636328] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:27:07.641 [2024-11-18 18:34:05.636475] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.641 [2024-11-18 18:34:05.791231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:07.641 [2024-11-18 18:34:05.937441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:07.641 [2024-11-18 18:34:05.937536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.641 [2024-11-18 18:34:05.937563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.641 [2024-11-18 18:34:05.937588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.641 [2024-11-18 18:34:05.937618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:07.641 [2024-11-18 18:34:05.940500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.641 [2024-11-18 18:34:05.940560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.641 [2024-11-18 18:34:05.940627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.641 [2024-11-18 18:34:05.940634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:08.577 
18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.577 Malloc0 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.577 Delay0 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.577 [2024-11-18 18:34:06.782223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.577 [2024-11-18 18:34:06.812140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.577 18:34:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:09.145 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:09.145 
18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:27:09.145 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:09.145 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:09.145 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:27:11.672 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:11.672 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:11.672 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:11.672 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:11.672 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:11.673 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:27:11.673 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3033021 00:27:11.673 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:11.673 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:11.673 [global] 00:27:11.673 thread=1 00:27:11.673 invalidate=1 00:27:11.673 rw=write 00:27:11.673 time_based=1 00:27:11.673 runtime=60 00:27:11.673 ioengine=libaio 00:27:11.673 direct=1 00:27:11.673 bs=4096 00:27:11.673 
iodepth=1 00:27:11.673 norandommap=0 00:27:11.673 numjobs=1 00:27:11.673 00:27:11.673 verify_dump=1 00:27:11.673 verify_backlog=512 00:27:11.673 verify_state_save=0 00:27:11.673 do_verify=1 00:27:11.673 verify=crc32c-intel 00:27:11.673 [job0] 00:27:11.673 filename=/dev/nvme0n1 00:27:11.673 Could not set queue depth (nvme0n1) 00:27:11.673 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:11.673 fio-3.35 00:27:11.673 Starting 1 thread 00:27:14.202 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:14.202 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.202 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:14.202 true 00:27:14.202 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.202 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:14.202 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.202 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:14.202 true 00:27:14.202 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.202 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:14.202 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.202 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:27:14.202 true 00:27:14.202 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.202 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:14.202 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.202 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:14.202 true 00:27:14.202 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.202 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:17.484 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:17.484 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.484 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.484 true 00:27:17.484 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.484 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:17.484 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.484 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.484 true 00:27:17.484 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.484 18:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:17.484 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.484 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.484 true 00:27:17.484 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.484 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:17.484 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.484 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.484 true 00:27:17.484 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.484 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:17.484 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3033021 00:28:13.692 00:28:13.692 job0: (groupid=0, jobs=1): err= 0: pid=3033090: Mon Nov 18 18:35:09 2024 00:28:13.692 read: IOPS=146, BW=587KiB/s (601kB/s)(34.4MiB/60012msec) 00:28:13.692 slat (usec): min=4, max=7775, avg=13.41, stdev=83.10 00:28:13.692 clat (usec): min=249, max=42022, avg=1851.22, stdev=7680.36 00:28:13.692 lat (usec): min=256, max=48938, avg=1864.62, stdev=7687.63 00:28:13.692 clat percentiles (usec): 00:28:13.692 | 1.00th=[ 262], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 285], 00:28:13.692 | 30.00th=[ 302], 40.00th=[ 318], 50.00th=[ 334], 60.00th=[ 351], 00:28:13.692 | 70.00th=[ 379], 80.00th=[ 412], 90.00th=[ 449], 95.00th=[ 529], 00:28:13.692 | 
99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:28:13.692 | 99.99th=[42206] 00:28:13.692 write: IOPS=153, BW=614KiB/s (629kB/s)(36.0MiB/60012msec); 0 zone resets 00:28:13.692 slat (nsec): min=6039, max=83117, avg=15564.40, stdev=9665.12 00:28:13.692 clat (usec): min=196, max=40868k, avg=4707.77, stdev=425709.87 00:28:13.692 lat (usec): min=204, max=40868k, avg=4723.34, stdev=425710.11 00:28:13.692 clat percentiles (usec): 00:28:13.692 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 221], 00:28:13.692 | 20.00th=[ 227], 30.00th=[ 233], 40.00th=[ 239], 00:28:13.692 | 50.00th=[ 249], 60.00th=[ 265], 70.00th=[ 281], 00:28:13.692 | 80.00th=[ 310], 90.00th=[ 379], 95.00th=[ 424], 00:28:13.692 | 99.00th=[ 461], 99.50th=[ 474], 99.90th=[ 494], 00:28:13.692 | 99.95th=[ 553], 99.99th=[17112761] 00:28:13.692 bw ( KiB/s): min= 136, max= 8192, per=100.00%, avg=4608.00, stdev=2715.54, samples=16 00:28:13.692 iops : min= 34, max= 2048, avg=1152.00, stdev=678.88, samples=16 00:28:13.692 lat (usec) : 250=26.33%, 500=70.51%, 750=1.33%, 1000=0.01% 00:28:13.692 lat (msec) : 2=0.01%, 50=1.81%, >=2000=0.01% 00:28:13.692 cpu : usr=0.35%, sys=0.50%, ctx=18019, majf=0, minf=1 00:28:13.692 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:13.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.692 issued rwts: total=8800,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.692 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:13.692 00:28:13.692 Run status group 0 (all jobs): 00:28:13.692 READ: bw=587KiB/s (601kB/s), 587KiB/s-587KiB/s (601kB/s-601kB/s), io=34.4MiB (36.0MB), run=60012-60012msec 00:28:13.692 WRITE: bw=614KiB/s (629kB/s), 614KiB/s-614KiB/s (629kB/s-629kB/s), io=36.0MiB (37.7MB), run=60012-60012msec 00:28:13.692 00:28:13.692 Disk stats (read/write): 00:28:13.692 nvme0n1: ios=8896/9216, merge=0/0, 
ticks=16131/2351, in_queue=18482, util=99.64% 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:13.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:13.692 nvmf hotplug test: fio successful as expected 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:13.692 
18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:13.692 rmmod nvme_tcp 00:28:13.692 rmmod nvme_fabrics 00:28:13.692 rmmod nvme_keyring 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 3032585 ']' 00:28:13.692 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 3032585 00:28:13.692 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 3032585 ']' 
00:28:13.693 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 3032585 00:28:13.693 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:28:13.693 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:13.693 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3032585 00:28:13.693 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:13.693 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:13.693 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3032585' 00:28:13.693 killing process with pid 3032585 00:28:13.693 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 3032585 00:28:13.693 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 3032585 00:28:13.693 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:13.693 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:13.693 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:13.693 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:13.693 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:28:13.693 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:13.693 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@791 -- # iptables-restore 00:28:13.693 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:13.693 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:13.693 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.693 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:13.693 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.124 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:15.124 00:28:15.124 real 1m10.208s 00:28:15.124 user 4m15.926s 00:28:15.124 sys 0m7.551s 00:28:15.124 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:15.124 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:15.124 ************************************ 00:28:15.124 END TEST nvmf_initiator_timeout 00:28:15.124 ************************************ 00:28:15.124 18:35:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:15.124 18:35:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:15.124 18:35:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:15.124 18:35:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:15.124 18:35:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:17.073 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.073 18:35:15 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:28:17.073 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:17.073 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:17.073 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:17.073 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:17.073 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.074 18:35:15 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:17.074 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:17.074 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:17.074 Found net devices under 
0000:0a:00.0: cvl_0_0 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:17.074 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:17.074 
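The device scan above walks the PCI bus, matches supported NIC vendor/device IDs (here Intel `0x8086`/`0x159b`, the E810), and collects the net interfaces under each matching function. A self-contained sketch of that sysfs walk; it builds a throwaway fake sysfs tree so it runs anywhere, whereas the real scan reads `/sys/bus/pci/devices`:

```shell
#!/usr/bin/env bash
# Sketch of the PCI NIC discovery loop, against a fabricated sysfs tree.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:0a:00.0/net/cvl_0_0" "$sysfs/0000:0a:00.1/net/cvl_0_1"
for d in "$sysfs"/0000:0a:00.?; do
    echo 0x8086 > "$d/vendor"   # Intel vendor ID
    echo 0x159b > "$d/device"   # E810 device ID
done

net_devs=()
for pci in "$sysfs"/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    [ "$vendor" = 0x8086 ] && [ "$device" = 0x159b ] || continue
    echo "Found ${pci##*/} ($vendor - $device)"
    # Each matching PCI function exposes its net interfaces under net/.
    for net in "$pci"/net/*; do
        net_devs+=("${net##*/}")
    done
done
echo "net devices: ${net_devs[*]}"
rm -rf "$sysfs"
```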
************************************ 00:28:17.074 START TEST nvmf_perf_adq 00:28:17.074 ************************************ 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:17.074 * Looking for test storage... 00:28:17.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:28:17.074 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 
00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:17.333 18:35:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:17.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.333 --rc genhtml_branch_coverage=1 00:28:17.333 --rc genhtml_function_coverage=1 00:28:17.333 --rc genhtml_legend=1 00:28:17.333 --rc geninfo_all_blocks=1 00:28:17.333 --rc geninfo_unexecuted_blocks=1 00:28:17.333 00:28:17.333 ' 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:17.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.333 --rc genhtml_branch_coverage=1 00:28:17.333 --rc genhtml_function_coverage=1 00:28:17.333 --rc genhtml_legend=1 00:28:17.333 --rc geninfo_all_blocks=1 00:28:17.333 --rc geninfo_unexecuted_blocks=1 00:28:17.333 00:28:17.333 ' 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:17.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.333 --rc genhtml_branch_coverage=1 00:28:17.333 --rc genhtml_function_coverage=1 00:28:17.333 --rc genhtml_legend=1 00:28:17.333 --rc geninfo_all_blocks=1 00:28:17.333 --rc geninfo_unexecuted_blocks=1 00:28:17.333 00:28:17.333 ' 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:17.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.333 --rc genhtml_branch_coverage=1 00:28:17.333 --rc genhtml_function_coverage=1 00:28:17.333 --rc genhtml_legend=1 00:28:17.333 --rc geninfo_all_blocks=1 00:28:17.333 --rc geninfo_unexecuted_blocks=1 00:28:17.333 00:28:17.333 ' 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:17.333 18:35:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:17.333 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.334 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.334 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.334 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.334 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.334 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.334 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:17.334 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.334 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:17.334 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:17.334 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:17.334 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.334 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.334 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.334 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:17.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:17.334 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:17.334 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:17.334 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:17.334 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:17.334 18:35:15 
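The "`[: : integer expression expected`" message above is a recoverable bash error recorded in the log: common.sh line 33 evaluates `'[' '' -eq 1 ']'`, and `-eq` requires integer operands, so the test errors out and the script falls through to the else path. A minimal sketch of the failure mode and the usual defensive pattern (`VAR` here is a hypothetical stand-in for whatever variable was empty, not the actual name from common.sh):

```shell
VAR=""

# Reproduce the failure: with an empty string, `-eq` has no integer to
# compare, so `[` prints "integer expression expected" on stderr
# (suppressed here) and returns a non-zero status.
if [ "$VAR" -eq 1 ] 2>/dev/null; then
    echo "flag set"
else
    echo "flag unset (or test errored)"
fi

# The usual fix: substitute a numeric default before the arithmetic test.
if [ "${VAR:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"
fi
```

Because `[` merely returns a failure status on the bad comparison, the surrounding script keeps running, which is why the log continues normally after the error line.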
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:17.334 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:19.862 18:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:19.862 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:19.862 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:19.862 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:19.862 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:19.862 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:20.121 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:22.649 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:27.918 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:27.918 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:27.919 18:35:25 
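The `adq_reload_driver` calls traced above (`modprobe -a sch_mqprio`, `rmmod ice`, `modprobe ice`, `sleep 5`) reset the Intel E810 `ice` driver so ADQ state starts clean before the test. A hedged sketch of that sequence; the `run` prefix parameter is an illustrative addition (pass `echo` for a root-free dry run, an empty command to execute for real):

```shell
# Sketch of the driver-reload step from the log, in dry-run-capable form.
adq_reload_driver() {
    local run="$1"                 # e.g. "echo" to print instead of execute
    $run modprobe -a sch_mqprio    # mqprio qdisc module that ADQ relies on
    $run rmmod ice                 # unload the E810 driver...
    $run modprobe ice              # ...and load it fresh
    $run sleep 5                   # give the interfaces time to come back up
}

adq_reload_driver echo
```

The five-second sleep matches the `perf_adq.sh@63 -- # sleep 5` line in the trace: the cvl_* net devices reappear asynchronously after the module reload.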
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:27.919 18:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:27.919 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:28:27.919 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:27.919 Found net devices under 0000:0a:00.0: cvl_0_0 
00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:27.919 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:27.919 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:27.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:27.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:28:27.920 00:28:27.920 --- 10.0.0.2 ping statistics --- 00:28:27.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.920 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:27.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:27.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:28:27.920 00:28:27.920 --- 10.0.0.1 ping statistics --- 00:28:27.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.920 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3044857 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3044857 00:28:27.920 
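The `nvmf_tcp_init` trace above builds a two-endpoint topology on one physical NIC: port `cvl_0_0` is moved into namespace `cvl_0_0_ns_spdk` as the target (10.0.0.2) while `cvl_0_1` stays in the root namespace as the initiator (10.0.0.1), with an iptables rule opening NVMe/TCP port 4420 and a ping pair verifying reachability. A hedged reconstruction of that sequence; the `run` dry-run prefix is an illustrative addition, and real execution needs root:

```shell
# Sketch of the namespace-based target/initiator split seen in the log.
nvmf_tcp_init_sketch() {
    local run="$1" ns=cvl_0_0_ns_spdk
    $run ip netns add "$ns"
    $run ip link set cvl_0_0 netns "$ns"          # target port into the namespace
    $run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    $run ip link set cvl_0_1 up
    $run ip netns exec "$ns" ip link set cvl_0_0 up
    $run ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP listener port toward the initiator interface
    $run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    $run ping -c 1 10.0.0.2                       # reachability check, as logged
}

nvmf_tcp_init_sketch echo
```

Isolating the target port in its own namespace lets both ends of the TCP connection live on the same host while still traversing the physical E810 ports, which is what the phy-mode (`NET_TYPE=phy`) test requires.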
18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3044857 ']' 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:27.920 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:27.920 [2024-11-18 18:35:25.804460] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:28:27.920 [2024-11-18 18:35:25.804626] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.920 [2024-11-18 18:35:25.960731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:27.920 [2024-11-18 18:35:26.104867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.920 [2024-11-18 18:35:26.104960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:27.920 [2024-11-18 18:35:26.104987] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.920 [2024-11-18 18:35:26.105012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:27.920 [2024-11-18 18:35:26.105032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:27.920 [2024-11-18 18:35:26.108043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.920 [2024-11-18 18:35:26.108105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:27.920 [2024-11-18 18:35:26.108166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.920 [2024-11-18 18:35:26.108174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:28.486 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:28.486 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:28.486 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:28.486 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:28.486 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:28.486 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.486 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:28.486 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:28.486 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:28.486 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.486 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:28.486 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.744 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:28.744 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:28.744 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.744 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:28.744 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.744 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:28.744 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.744 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:29.001 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.001 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:29.001 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.002 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:29.002 [2024-11-18 18:35:27.205235] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.002 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.002 
18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:29.002 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.002 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:29.002 Malloc1 00:28:29.002 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.002 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:29.002 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.002 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:29.002 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.002 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:29.002 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.002 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:29.002 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.002 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:29.002 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.002 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:29.002 [2024-11-18 18:35:27.334445] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:29.259 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.259 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3045023 00:28:29.259 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:29.259 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:31.159 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:31.159 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.159 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.159 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.159 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:31.159 "tick_rate": 2700000000, 00:28:31.159 "poll_groups": [ 00:28:31.159 { 00:28:31.159 "name": "nvmf_tgt_poll_group_000", 00:28:31.159 "admin_qpairs": 1, 00:28:31.159 "io_qpairs": 1, 00:28:31.159 "current_admin_qpairs": 1, 00:28:31.159 "current_io_qpairs": 1, 00:28:31.159 "pending_bdev_io": 0, 00:28:31.159 "completed_nvme_io": 16785, 00:28:31.159 "transports": [ 00:28:31.159 { 00:28:31.159 "trtype": "TCP" 00:28:31.159 } 00:28:31.159 ] 00:28:31.159 }, 00:28:31.159 { 00:28:31.159 "name": "nvmf_tgt_poll_group_001", 00:28:31.159 "admin_qpairs": 0, 00:28:31.159 "io_qpairs": 1, 00:28:31.159 "current_admin_qpairs": 0, 00:28:31.159 "current_io_qpairs": 1, 00:28:31.159 "pending_bdev_io": 0, 00:28:31.159 "completed_nvme_io": 17329, 00:28:31.159 "transports": [ 
00:28:31.159 { 00:28:31.159 "trtype": "TCP" 00:28:31.159 } 00:28:31.159 ] 00:28:31.159 }, 00:28:31.159 { 00:28:31.159 "name": "nvmf_tgt_poll_group_002", 00:28:31.159 "admin_qpairs": 0, 00:28:31.159 "io_qpairs": 1, 00:28:31.159 "current_admin_qpairs": 0, 00:28:31.159 "current_io_qpairs": 1, 00:28:31.159 "pending_bdev_io": 0, 00:28:31.159 "completed_nvme_io": 16429, 00:28:31.159 "transports": [ 00:28:31.159 { 00:28:31.159 "trtype": "TCP" 00:28:31.159 } 00:28:31.159 ] 00:28:31.159 }, 00:28:31.159 { 00:28:31.159 "name": "nvmf_tgt_poll_group_003", 00:28:31.159 "admin_qpairs": 0, 00:28:31.159 "io_qpairs": 1, 00:28:31.159 "current_admin_qpairs": 0, 00:28:31.159 "current_io_qpairs": 1, 00:28:31.159 "pending_bdev_io": 0, 00:28:31.159 "completed_nvme_io": 16914, 00:28:31.159 "transports": [ 00:28:31.159 { 00:28:31.159 "trtype": "TCP" 00:28:31.159 } 00:28:31.159 ] 00:28:31.159 } 00:28:31.159 ] 00:28:31.159 }' 00:28:31.159 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:31.159 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:31.159 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:31.159 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:31.159 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3045023 00:28:39.264 Initializing NVMe Controllers 00:28:39.264 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:39.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:39.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:39.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:39.264 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:39.264 Initialization complete. Launching workers. 00:28:39.264 ======================================================== 00:28:39.264 Latency(us) 00:28:39.264 Device Information : IOPS MiB/s Average min max 00:28:39.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8960.00 35.00 7143.37 3021.41 11489.37 00:28:39.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9347.40 36.51 6847.19 3091.52 13672.22 00:28:39.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8898.80 34.76 7194.48 3292.04 11657.62 00:28:39.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9141.90 35.71 7004.24 2835.78 11677.12 00:28:39.264 ======================================================== 00:28:39.264 Total : 36348.09 141.98 7044.72 2835.78 13672.22 00:28:39.264 00:28:39.264 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:39.264 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:39.264 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:39.264 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:39.264 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:39.264 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:39.264 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:39.264 rmmod nvme_tcp 00:28:39.264 rmmod nvme_fabrics 00:28:39.522 rmmod nvme_keyring 00:28:39.522 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:39.522 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:39.522 18:35:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:39.522 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3044857 ']' 00:28:39.522 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3044857 00:28:39.522 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3044857 ']' 00:28:39.522 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3044857 00:28:39.522 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:39.522 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.522 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3044857 00:28:39.522 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:39.522 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:39.522 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3044857' 00:28:39.522 killing process with pid 3044857 00:28:39.522 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3044857 00:28:39.522 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3044857 00:28:40.895 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:40.895 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:40.895 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:40.895 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:40.895 
18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:40.895 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:40.895 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:40.895 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:40.895 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:40.895 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.895 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.895 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.792 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:42.792 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:42.792 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:42.792 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:43.726 18:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:46.324 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@476 -- # prepare_net_devs 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:51.591 18:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:51.591 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.591 18:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:51.591 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:28:51.591 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.591 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:51.591 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:51.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:51.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:28:51.592 00:28:51.592 --- 10.0.0.2 ping statistics --- 00:28:51.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.592 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:51.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:51.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:28:51.592 00:28:51.592 --- 10.0.0.1 ping statistics --- 00:28:51.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.592 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:51.592 net.core.busy_poll = 1 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:51.592 net.core.busy_read = 1 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3047892 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
3047892 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3047892 ']' 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:51.592 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:51.592 [2024-11-18 18:35:49.629507] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:28:51.592 [2024-11-18 18:35:49.629714] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:51.592 [2024-11-18 18:35:49.773519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:51.592 [2024-11-18 18:35:49.896965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:51.592 [2024-11-18 18:35:49.897050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:51.592 [2024-11-18 18:35:49.897073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:51.592 [2024-11-18 18:35:49.897093] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:51.592 [2024-11-18 18:35:49.897110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:51.592 [2024-11-18 18:35:49.900080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.592 [2024-11-18 18:35:49.900125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:51.592 [2024-11-18 18:35:49.900164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.592 [2024-11-18 18:35:49.900185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:52.527 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:52.527 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:52.527 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:52.527 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:52.527 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:52.527 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.527 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:52.527 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:52.527 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:52.527 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.527 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:52.527 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
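[Editor's note] The ADQ driver configuration traced earlier (perf_adq.sh lines 22-38) can be summarized as the command sequence below. This is a sketch reconstructed from the xtrace output, not part of the test scripts: it assumes an ADQ-capable NIC (here cvl_0_0, an Intel E810 interface) moved into the cvl_0_0_ns_spdk network namespace, and it requires root on matching hardware, so it is not runnable standalone.

```shell
# Sketch reconstructed from the xtrace above; assumes interface cvl_0_0
# inside namespace cvl_0_0_ns_spdk and a listener at 10.0.0.2:4420.
IFACE=cvl_0_0
NS="ip netns exec cvl_0_0_ns_spdk"

# Enable hardware TC offload and disable the driver's packet-inspect optimization
$NS ethtool --offload "$IFACE" hw-tc-offload on
$NS ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off

# Busy polling lets socket reads spin on the device queues instead of sleeping
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes in channel mode: TC0 on queues 0-1, TC1 on queues 2-3
$NS tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev "$IFACE" ingress

# Steer NVMe/TCP traffic (dst 10.0.0.2:4420) to TC1 in hardware (skip_sw)
$NS tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

With this in place the test's nvmf_get_stats check below expects all I/O qpairs to land on a single poll group, confirming the hardware steering took effect.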
00:28:52.527 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:52.527 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:52.527 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.527 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:52.527 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.527 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:52.527 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.527 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:52.786 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.786 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:52.786 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.786 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:52.786 [2024-11-18 18:35:51.042240] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:52.786 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.786 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:52.786 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.786 18:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.043 Malloc1 00:28:53.043 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.043 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:53.043 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.043 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.043 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.043 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:53.043 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.043 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.043 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.043 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:53.043 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.043 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.043 [2024-11-18 18:35:51.167398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.044 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.044 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3048051 
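[Editor's note] The target-side setup the test drives through its rpc_cmd wrapper corresponds to the SPDK rpc.py calls below. A sketch only: the $RPC variable and the scripts/rpc.py path are illustrative, the socket path, NQN, and addresses are taken verbatim from the trace, and the commands require a running nvmf_tgt started with --wait-for-rpc.

```shell
# Sketch of the RPC sequence traced above, issued via SPDK's rpc.py.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

# Placement IDs + zero-copy send on the posix sock implementation (ADQ prereq)
$RPC sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
$RPC framework_start_init

# TCP transport with sock-priority 1, matching the tc filter's hw_tc 1
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1

# 64 MiB / 512 B-block malloc bdev exported through one subsystem on 10.0.0.2:4420
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The --sock-priority value is what ties the target's sockets back to the ADQ traffic class configured on the NIC.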
00:28:53.044 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:53.044 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:54.944 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:54.944 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.944 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.944 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.944 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:54.944 "tick_rate": 2700000000, 00:28:54.944 "poll_groups": [ 00:28:54.944 { 00:28:54.944 "name": "nvmf_tgt_poll_group_000", 00:28:54.944 "admin_qpairs": 1, 00:28:54.944 "io_qpairs": 4, 00:28:54.944 "current_admin_qpairs": 1, 00:28:54.944 "current_io_qpairs": 4, 00:28:54.944 "pending_bdev_io": 0, 00:28:54.944 "completed_nvme_io": 23187, 00:28:54.944 "transports": [ 00:28:54.944 { 00:28:54.944 "trtype": "TCP" 00:28:54.944 } 00:28:54.944 ] 00:28:54.944 }, 00:28:54.944 { 00:28:54.944 "name": "nvmf_tgt_poll_group_001", 00:28:54.944 "admin_qpairs": 0, 00:28:54.944 "io_qpairs": 0, 00:28:54.944 "current_admin_qpairs": 0, 00:28:54.944 "current_io_qpairs": 0, 00:28:54.944 "pending_bdev_io": 0, 00:28:54.944 "completed_nvme_io": 0, 00:28:54.944 "transports": [ 00:28:54.944 { 00:28:54.944 "trtype": "TCP" 00:28:54.944 } 00:28:54.944 ] 00:28:54.944 }, 00:28:54.944 { 00:28:54.944 "name": "nvmf_tgt_poll_group_002", 00:28:54.944 "admin_qpairs": 0, 00:28:54.944 "io_qpairs": 0, 00:28:54.944 "current_admin_qpairs": 0, 00:28:54.944 
"current_io_qpairs": 0, 00:28:54.944 "pending_bdev_io": 0, 00:28:54.944 "completed_nvme_io": 0, 00:28:54.944 "transports": [ 00:28:54.944 { 00:28:54.944 "trtype": "TCP" 00:28:54.944 } 00:28:54.944 ] 00:28:54.944 }, 00:28:54.944 { 00:28:54.944 "name": "nvmf_tgt_poll_group_003", 00:28:54.944 "admin_qpairs": 0, 00:28:54.944 "io_qpairs": 0, 00:28:54.944 "current_admin_qpairs": 0, 00:28:54.944 "current_io_qpairs": 0, 00:28:54.944 "pending_bdev_io": 0, 00:28:54.944 "completed_nvme_io": 0, 00:28:54.944 "transports": [ 00:28:54.944 { 00:28:54.944 "trtype": "TCP" 00:28:54.944 } 00:28:54.944 ] 00:28:54.944 } 00:28:54.944 ] 00:28:54.944 }' 00:28:54.944 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:54.944 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:54.944 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:28:54.944 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:28:54.944 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3048051 00:29:03.068 Initializing NVMe Controllers 00:29:03.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:03.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:03.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:03.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:03.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:03.068 Initialization complete. Launching workers. 
00:29:03.068 ======================================================== 00:29:03.068 Latency(us) 00:29:03.068 Device Information : IOPS MiB/s Average min max 00:29:03.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 3441.20 13.44 18623.48 3121.04 70555.31 00:29:03.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 3030.50 11.84 21123.02 2691.07 70213.22 00:29:03.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 3263.10 12.75 19633.89 2670.54 72194.95 00:29:03.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 3021.10 11.80 21187.65 2605.97 70736.10 00:29:03.068 ======================================================== 00:29:03.068 Total : 12755.90 49.83 20083.08 2605.97 72194.95 00:29:03.068 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:03.326 rmmod nvme_tcp 00:29:03.326 rmmod nvme_fabrics 00:29:03.326 rmmod nvme_keyring 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:03.326 18:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3047892 ']' 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3047892 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3047892 ']' 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3047892 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3047892 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3047892' 00:29:03.326 killing process with pid 3047892 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3047892 00:29:03.326 18:36:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3047892 00:29:04.695 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:04.695 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:04.695 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:04.695 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:04.695 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:29:04.695 
18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:04.695 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:29:04.695 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:04.695 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:04.695 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.695 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.695 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.592 18:36:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:06.592 18:36:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:29:06.592 00:29:06.592 real 0m49.559s 00:29:06.592 user 2m55.951s 00:29:06.592 sys 0m8.810s 00:29:06.592 18:36:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:06.592 18:36:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:06.592 ************************************ 00:29:06.592 END TEST nvmf_perf_adq 00:29:06.592 ************************************ 00:29:06.592 18:36:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:06.592 18:36:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:06.592 18:36:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:06.592 18:36:04 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:29:06.851 ************************************ 00:29:06.851 START TEST nvmf_shutdown 00:29:06.851 ************************************ 00:29:06.851 18:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:06.851 * Looking for test storage... 00:29:06.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:06.851 18:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:06.851 18:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:29:06.851 18:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:06.851 18:36:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:06.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.851 --rc genhtml_branch_coverage=1 00:29:06.851 --rc genhtml_function_coverage=1 00:29:06.851 --rc genhtml_legend=1 00:29:06.851 --rc geninfo_all_blocks=1 00:29:06.851 --rc geninfo_unexecuted_blocks=1 00:29:06.851 00:29:06.851 ' 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:06.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.851 --rc genhtml_branch_coverage=1 00:29:06.851 --rc genhtml_function_coverage=1 00:29:06.851 --rc genhtml_legend=1 00:29:06.851 --rc geninfo_all_blocks=1 00:29:06.851 --rc geninfo_unexecuted_blocks=1 00:29:06.851 00:29:06.851 ' 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:06.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.851 --rc genhtml_branch_coverage=1 00:29:06.851 --rc genhtml_function_coverage=1 00:29:06.851 --rc genhtml_legend=1 00:29:06.851 --rc geninfo_all_blocks=1 00:29:06.851 --rc geninfo_unexecuted_blocks=1 00:29:06.851 00:29:06.851 ' 00:29:06.851 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:06.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.851 --rc genhtml_branch_coverage=1 00:29:06.852 --rc genhtml_function_coverage=1 00:29:06.852 --rc genhtml_legend=1 00:29:06.852 --rc geninfo_all_blocks=1 00:29:06.852 --rc geninfo_unexecuted_blocks=1 00:29:06.852 00:29:06.852 ' 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:06.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:06.852 ************************************ 00:29:06.852 START TEST nvmf_shutdown_tc1 00:29:06.852 ************************************ 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:06.852 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:09.379 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:09.379 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:09.379 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:09.379 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:09.379 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:09.379 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:09.379 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:09.379 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:29:09.379 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:09.379 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:29:09.379 18:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:29:09.379 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:29:09.379 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:29:09.379 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:29:09.379 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:09.379 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:09.379 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:09.380 18:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:09.380 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.380 18:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:09.380 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:09.380 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:09.380 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:09.380 18:36:07 
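The discovery loop traced above (`gather_supported_nvmf_pci_devs`) buckets NICs by PCI vendor:device ID into e810, x722, and mlx lists before selecting TCP test interfaces; both ports found here (0000:0a:00.0 / 0000:0a:00.1, 0x8086:0x159b) land in the e810 bucket. A condensed sketch of that bucketing, with the ID tables copied from the traced nvmf/common.sh lines (the function name is ours, not SPDK's):

```shell
# Sketch of the vendor:device bucketing done in the trace above.
# ID tables are copied from the traced nvmf/common.sh assignments.
classify_nic() {
  local vendor=$1 device=$2
  case "$vendor:$device" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;
    0x8086:0x37d2)               echo x722 ;;
    0x15b3:0xa2dc|0x15b3:0x1021|0x15b3:0xa2d6|0x15b3:0x101d|\
    0x15b3:0x101b|0x15b3:0x1017|0x15b3:0x1019|0x15b3:0x1015|\
    0x15b3:0x1013)               echo mlx ;;
    *)                           echo unknown ;;
  esac
}
classify_nic 0x8086 0x159b   # prints: e810 (the ice-driven ports above)
```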
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:09.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:09.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:29:09.380 00:29:09.380 --- 10.0.0.2 ping statistics --- 00:29:09.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.380 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:09.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:09.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:29:09.380 00:29:09.380 --- 10.0.0.1 ping statistics --- 00:29:09.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.380 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:09.380 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.381 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:09.381 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:09.381 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:09.381 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:09.381 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:09.381 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:09.381 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3051618 00:29:09.381 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:09.381 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3051618 00:29:09.381 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3051618 ']' 00:29:09.381 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.381 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:09.381 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:09.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.381 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:09.381 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:09.381 [2024-11-18 18:36:07.516705] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:29:09.381 [2024-11-18 18:36:07.516925] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.381 [2024-11-18 18:36:07.668764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:09.639 [2024-11-18 18:36:07.805309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:09.639 [2024-11-18 18:36:07.805393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.639 [2024-11-18 18:36:07.805419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.639 [2024-11-18 18:36:07.805442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.639 [2024-11-18 18:36:07.805462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:09.639 [2024-11-18 18:36:07.808295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:09.639 [2024-11-18 18:36:07.808401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:09.639 [2024-11-18 18:36:07.808450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.639 [2024-11-18 18:36:07.808457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:10.204 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:10.204 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:10.204 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:10.204 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:10.204 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:10.204 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.204 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:10.204 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.204 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:10.204 [2024-11-18 18:36:08.512980] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.204 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.204 18:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:10.204 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:10.204 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:10.204 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:10.204 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:10.204 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.204 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:10.204 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.204 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:10.463 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.463 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:10.463 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.463 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:10.463 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.463 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:29:10.463 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.463 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:10.463 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.463 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:10.463 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.463 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:10.463 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.463 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:10.463 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.463 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:10.463 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:10.463 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.463 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:10.463 Malloc1 00:29:10.463 [2024-11-18 18:36:08.664199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.463 Malloc2 00:29:10.721 Malloc3 00:29:10.721 Malloc4 00:29:10.721 Malloc5 00:29:10.979 Malloc6 00:29:10.979 Malloc7 00:29:11.237 Malloc8 00:29:11.237 Malloc9 
00:29:11.237 Malloc10 00:29:11.237 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.237 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:11.237 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:11.237 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3052287 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3052287 /var/tmp/bdevperf.sock 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3052287 ']' 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:29:11.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.496 { 00:29:11.496 "params": { 00:29:11.496 "name": "Nvme$subsystem", 00:29:11.496 "trtype": "$TEST_TRANSPORT", 00:29:11.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.496 "adrfam": "ipv4", 00:29:11.496 "trsvcid": "$NVMF_PORT", 00:29:11.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.496 "hdgst": ${hdgst:-false}, 00:29:11.496 "ddgst": ${ddgst:-false} 00:29:11.496 }, 00:29:11.496 "method": "bdev_nvme_attach_controller" 00:29:11.496 } 00:29:11.496 EOF 00:29:11.496 )") 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.496 { 00:29:11.496 "params": { 00:29:11.496 "name": "Nvme$subsystem", 00:29:11.496 "trtype": "$TEST_TRANSPORT", 00:29:11.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.496 "adrfam": "ipv4", 00:29:11.496 "trsvcid": "$NVMF_PORT", 00:29:11.496 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.496 "hdgst": ${hdgst:-false}, 00:29:11.496 "ddgst": ${ddgst:-false} 00:29:11.496 }, 00:29:11.496 "method": "bdev_nvme_attach_controller" 00:29:11.496 } 00:29:11.496 EOF 00:29:11.496 )") 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.496 { 00:29:11.496 "params": { 00:29:11.496 "name": "Nvme$subsystem", 00:29:11.496 "trtype": "$TEST_TRANSPORT", 00:29:11.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.496 "adrfam": "ipv4", 00:29:11.496 "trsvcid": "$NVMF_PORT", 00:29:11.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.496 "hdgst": ${hdgst:-false}, 00:29:11.496 "ddgst": ${ddgst:-false} 00:29:11.496 }, 00:29:11.496 "method": "bdev_nvme_attach_controller" 00:29:11.496 } 00:29:11.496 EOF 00:29:11.496 )") 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.496 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.496 { 00:29:11.496 "params": { 00:29:11.496 "name": "Nvme$subsystem", 00:29:11.496 "trtype": "$TEST_TRANSPORT", 00:29:11.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.496 "adrfam": "ipv4", 00:29:11.496 "trsvcid": "$NVMF_PORT", 00:29:11.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.496 "hdgst": 
${hdgst:-false}, 00:29:11.496 "ddgst": ${ddgst:-false} 00:29:11.497 }, 00:29:11.497 "method": "bdev_nvme_attach_controller" 00:29:11.497 } 00:29:11.497 EOF 00:29:11.497 )") 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.497 { 00:29:11.497 "params": { 00:29:11.497 "name": "Nvme$subsystem", 00:29:11.497 "trtype": "$TEST_TRANSPORT", 00:29:11.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.497 "adrfam": "ipv4", 00:29:11.497 "trsvcid": "$NVMF_PORT", 00:29:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.497 "hdgst": ${hdgst:-false}, 00:29:11.497 "ddgst": ${ddgst:-false} 00:29:11.497 }, 00:29:11.497 "method": "bdev_nvme_attach_controller" 00:29:11.497 } 00:29:11.497 EOF 00:29:11.497 )") 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.497 { 00:29:11.497 "params": { 00:29:11.497 "name": "Nvme$subsystem", 00:29:11.497 "trtype": "$TEST_TRANSPORT", 00:29:11.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.497 "adrfam": "ipv4", 00:29:11.497 "trsvcid": "$NVMF_PORT", 00:29:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.497 "hdgst": ${hdgst:-false}, 00:29:11.497 "ddgst": ${ddgst:-false} 00:29:11.497 }, 00:29:11.497 "method": "bdev_nvme_attach_controller" 
00:29:11.497 } 00:29:11.497 EOF 00:29:11.497 )") 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.497 { 00:29:11.497 "params": { 00:29:11.497 "name": "Nvme$subsystem", 00:29:11.497 "trtype": "$TEST_TRANSPORT", 00:29:11.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.497 "adrfam": "ipv4", 00:29:11.497 "trsvcid": "$NVMF_PORT", 00:29:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.497 "hdgst": ${hdgst:-false}, 00:29:11.497 "ddgst": ${ddgst:-false} 00:29:11.497 }, 00:29:11.497 "method": "bdev_nvme_attach_controller" 00:29:11.497 } 00:29:11.497 EOF 00:29:11.497 )") 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.497 { 00:29:11.497 "params": { 00:29:11.497 "name": "Nvme$subsystem", 00:29:11.497 "trtype": "$TEST_TRANSPORT", 00:29:11.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.497 "adrfam": "ipv4", 00:29:11.497 "trsvcid": "$NVMF_PORT", 00:29:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.497 "hdgst": ${hdgst:-false}, 00:29:11.497 "ddgst": ${ddgst:-false} 00:29:11.497 }, 00:29:11.497 "method": "bdev_nvme_attach_controller" 00:29:11.497 } 00:29:11.497 EOF 00:29:11.497 )") 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.497 { 00:29:11.497 "params": { 00:29:11.497 "name": "Nvme$subsystem", 00:29:11.497 "trtype": "$TEST_TRANSPORT", 00:29:11.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.497 "adrfam": "ipv4", 00:29:11.497 "trsvcid": "$NVMF_PORT", 00:29:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.497 "hdgst": ${hdgst:-false}, 00:29:11.497 "ddgst": ${ddgst:-false} 00:29:11.497 }, 00:29:11.497 "method": "bdev_nvme_attach_controller" 00:29:11.497 } 00:29:11.497 EOF 00:29:11.497 )") 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.497 { 00:29:11.497 "params": { 00:29:11.497 "name": "Nvme$subsystem", 00:29:11.497 "trtype": "$TEST_TRANSPORT", 00:29:11.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.497 "adrfam": "ipv4", 00:29:11.497 "trsvcid": "$NVMF_PORT", 00:29:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.497 "hdgst": ${hdgst:-false}, 00:29:11.497 "ddgst": ${ddgst:-false} 00:29:11.497 }, 00:29:11.497 "method": "bdev_nvme_attach_controller" 00:29:11.497 } 00:29:11.497 EOF 00:29:11.497 )") 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:11.497 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:11.497 "params": { 00:29:11.497 "name": "Nvme1", 00:29:11.497 "trtype": "tcp", 00:29:11.497 "traddr": "10.0.0.2", 00:29:11.497 "adrfam": "ipv4", 00:29:11.497 "trsvcid": "4420", 00:29:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:11.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:11.497 "hdgst": false, 00:29:11.497 "ddgst": false 00:29:11.497 }, 00:29:11.497 "method": "bdev_nvme_attach_controller" 00:29:11.497 },{ 00:29:11.497 "params": { 00:29:11.497 "name": "Nvme2", 00:29:11.497 "trtype": "tcp", 00:29:11.497 "traddr": "10.0.0.2", 00:29:11.497 "adrfam": "ipv4", 00:29:11.497 "trsvcid": "4420", 00:29:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:11.497 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:11.497 "hdgst": false, 00:29:11.497 "ddgst": false 00:29:11.497 }, 00:29:11.497 "method": "bdev_nvme_attach_controller" 00:29:11.497 },{ 00:29:11.497 "params": { 00:29:11.497 "name": "Nvme3", 00:29:11.497 "trtype": "tcp", 00:29:11.497 "traddr": "10.0.0.2", 00:29:11.497 "adrfam": "ipv4", 00:29:11.497 "trsvcid": "4420", 00:29:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:11.497 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:11.497 "hdgst": false, 00:29:11.497 "ddgst": false 00:29:11.497 }, 00:29:11.497 "method": "bdev_nvme_attach_controller" 00:29:11.497 },{ 00:29:11.497 "params": { 00:29:11.497 "name": "Nvme4", 00:29:11.497 "trtype": "tcp", 00:29:11.497 "traddr": "10.0.0.2", 00:29:11.497 "adrfam": "ipv4", 00:29:11.497 "trsvcid": "4420", 00:29:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:11.497 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:11.497 "hdgst": false, 00:29:11.497 "ddgst": false 00:29:11.497 }, 00:29:11.497 "method": "bdev_nvme_attach_controller" 00:29:11.497 },{ 
00:29:11.497 "params": { 00:29:11.497 "name": "Nvme5", 00:29:11.497 "trtype": "tcp", 00:29:11.497 "traddr": "10.0.0.2", 00:29:11.497 "adrfam": "ipv4", 00:29:11.497 "trsvcid": "4420", 00:29:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:11.497 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:11.497 "hdgst": false, 00:29:11.497 "ddgst": false 00:29:11.497 }, 00:29:11.497 "method": "bdev_nvme_attach_controller" 00:29:11.497 },{ 00:29:11.497 "params": { 00:29:11.497 "name": "Nvme6", 00:29:11.497 "trtype": "tcp", 00:29:11.497 "traddr": "10.0.0.2", 00:29:11.497 "adrfam": "ipv4", 00:29:11.497 "trsvcid": "4420", 00:29:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:11.497 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:11.497 "hdgst": false, 00:29:11.497 "ddgst": false 00:29:11.497 }, 00:29:11.497 "method": "bdev_nvme_attach_controller" 00:29:11.497 },{ 00:29:11.497 "params": { 00:29:11.497 "name": "Nvme7", 00:29:11.497 "trtype": "tcp", 00:29:11.497 "traddr": "10.0.0.2", 00:29:11.497 "adrfam": "ipv4", 00:29:11.497 "trsvcid": "4420", 00:29:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:11.497 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:11.497 "hdgst": false, 00:29:11.497 "ddgst": false 00:29:11.497 }, 00:29:11.497 "method": "bdev_nvme_attach_controller" 00:29:11.497 },{ 00:29:11.497 "params": { 00:29:11.497 "name": "Nvme8", 00:29:11.497 "trtype": "tcp", 00:29:11.497 "traddr": "10.0.0.2", 00:29:11.497 "adrfam": "ipv4", 00:29:11.497 "trsvcid": "4420", 00:29:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:11.497 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:11.497 "hdgst": false, 00:29:11.497 "ddgst": false 00:29:11.497 }, 00:29:11.498 "method": "bdev_nvme_attach_controller" 00:29:11.498 },{ 00:29:11.498 "params": { 00:29:11.498 "name": "Nvme9", 00:29:11.498 "trtype": "tcp", 00:29:11.498 "traddr": "10.0.0.2", 00:29:11.498 "adrfam": "ipv4", 00:29:11.498 "trsvcid": "4420", 00:29:11.498 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:11.498 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:29:11.498 "hdgst": false, 00:29:11.498 "ddgst": false 00:29:11.498 }, 00:29:11.498 "method": "bdev_nvme_attach_controller" 00:29:11.498 },{ 00:29:11.498 "params": { 00:29:11.498 "name": "Nvme10", 00:29:11.498 "trtype": "tcp", 00:29:11.498 "traddr": "10.0.0.2", 00:29:11.498 "adrfam": "ipv4", 00:29:11.498 "trsvcid": "4420", 00:29:11.498 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:11.498 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:11.498 "hdgst": false, 00:29:11.498 "ddgst": false 00:29:11.498 }, 00:29:11.498 "method": "bdev_nvme_attach_controller" 00:29:11.498 }' 00:29:11.498 [2024-11-18 18:36:09.674430] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:29:11.498 [2024-11-18 18:36:09.674578] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:11.498 [2024-11-18 18:36:09.822049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.756 [2024-11-18 18:36:09.952215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.658 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.658 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:13.658 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:13.658 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.658 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:13.658 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.658 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3052287 00:29:13.658 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:29:13.658 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:14.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3052287 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3051618 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.593 { 00:29:14.593 "params": { 00:29:14.593 "name": "Nvme$subsystem", 00:29:14.593 "trtype": "$TEST_TRANSPORT", 00:29:14.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.593 "adrfam": "ipv4", 00:29:14.593 "trsvcid": "$NVMF_PORT", 00:29:14.593 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.593 "hdgst": ${hdgst:-false}, 00:29:14.593 "ddgst": ${ddgst:-false} 00:29:14.593 }, 00:29:14.593 "method": "bdev_nvme_attach_controller" 00:29:14.593 } 00:29:14.593 EOF 00:29:14.593 )") 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.593 { 00:29:14.593 "params": { 00:29:14.593 "name": "Nvme$subsystem", 00:29:14.593 "trtype": "$TEST_TRANSPORT", 00:29:14.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.593 "adrfam": "ipv4", 00:29:14.593 "trsvcid": "$NVMF_PORT", 00:29:14.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.593 "hdgst": ${hdgst:-false}, 00:29:14.593 "ddgst": ${ddgst:-false} 00:29:14.593 }, 00:29:14.593 "method": "bdev_nvme_attach_controller" 00:29:14.593 } 00:29:14.593 EOF 00:29:14.593 )") 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.593 { 00:29:14.593 "params": { 00:29:14.593 "name": "Nvme$subsystem", 00:29:14.593 "trtype": "$TEST_TRANSPORT", 00:29:14.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.593 "adrfam": "ipv4", 00:29:14.593 "trsvcid": "$NVMF_PORT", 00:29:14.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.593 "hdgst": 
${hdgst:-false}, 00:29:14.593 "ddgst": ${ddgst:-false} 00:29:14.593 }, 00:29:14.593 "method": "bdev_nvme_attach_controller" 00:29:14.593 } 00:29:14.593 EOF 00:29:14.593 )") 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.593 { 00:29:14.593 "params": { 00:29:14.593 "name": "Nvme$subsystem", 00:29:14.593 "trtype": "$TEST_TRANSPORT", 00:29:14.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.593 "adrfam": "ipv4", 00:29:14.593 "trsvcid": "$NVMF_PORT", 00:29:14.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.593 "hdgst": ${hdgst:-false}, 00:29:14.593 "ddgst": ${ddgst:-false} 00:29:14.593 }, 00:29:14.593 "method": "bdev_nvme_attach_controller" 00:29:14.593 } 00:29:14.593 EOF 00:29:14.593 )") 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.593 { 00:29:14.593 "params": { 00:29:14.593 "name": "Nvme$subsystem", 00:29:14.593 "trtype": "$TEST_TRANSPORT", 00:29:14.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.593 "adrfam": "ipv4", 00:29:14.593 "trsvcid": "$NVMF_PORT", 00:29:14.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.593 "hdgst": ${hdgst:-false}, 00:29:14.593 "ddgst": ${ddgst:-false} 00:29:14.593 }, 00:29:14.593 "method": "bdev_nvme_attach_controller" 
00:29:14.593 } 00:29:14.593 EOF 00:29:14.593 )") 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.593 { 00:29:14.593 "params": { 00:29:14.593 "name": "Nvme$subsystem", 00:29:14.593 "trtype": "$TEST_TRANSPORT", 00:29:14.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.593 "adrfam": "ipv4", 00:29:14.593 "trsvcid": "$NVMF_PORT", 00:29:14.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.593 "hdgst": ${hdgst:-false}, 00:29:14.593 "ddgst": ${ddgst:-false} 00:29:14.593 }, 00:29:14.593 "method": "bdev_nvme_attach_controller" 00:29:14.593 } 00:29:14.593 EOF 00:29:14.593 )") 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.593 { 00:29:14.593 "params": { 00:29:14.593 "name": "Nvme$subsystem", 00:29:14.593 "trtype": "$TEST_TRANSPORT", 00:29:14.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.593 "adrfam": "ipv4", 00:29:14.593 "trsvcid": "$NVMF_PORT", 00:29:14.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.593 "hdgst": ${hdgst:-false}, 00:29:14.593 "ddgst": ${ddgst:-false} 00:29:14.593 }, 00:29:14.593 "method": "bdev_nvme_attach_controller" 00:29:14.593 } 00:29:14.593 EOF 00:29:14.593 )") 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.593 { 00:29:14.593 "params": { 00:29:14.593 "name": "Nvme$subsystem", 00:29:14.593 "trtype": "$TEST_TRANSPORT", 00:29:14.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.593 "adrfam": "ipv4", 00:29:14.593 "trsvcid": "$NVMF_PORT", 00:29:14.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.593 "hdgst": ${hdgst:-false}, 00:29:14.593 "ddgst": ${ddgst:-false} 00:29:14.593 }, 00:29:14.593 "method": "bdev_nvme_attach_controller" 00:29:14.593 } 00:29:14.593 EOF 00:29:14.593 )") 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.593 { 00:29:14.593 "params": { 00:29:14.593 "name": "Nvme$subsystem", 00:29:14.593 "trtype": "$TEST_TRANSPORT", 00:29:14.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.593 "adrfam": "ipv4", 00:29:14.593 "trsvcid": "$NVMF_PORT", 00:29:14.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.593 "hdgst": ${hdgst:-false}, 00:29:14.593 "ddgst": ${ddgst:-false} 00:29:14.593 }, 00:29:14.593 "method": "bdev_nvme_attach_controller" 00:29:14.593 } 00:29:14.593 EOF 00:29:14.593 )") 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.593 { 00:29:14.593 "params": { 00:29:14.593 "name": "Nvme$subsystem", 00:29:14.593 "trtype": "$TEST_TRANSPORT", 00:29:14.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.593 "adrfam": "ipv4", 00:29:14.593 "trsvcid": "$NVMF_PORT", 00:29:14.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.593 "hdgst": ${hdgst:-false}, 00:29:14.593 "ddgst": ${ddgst:-false} 00:29:14.593 }, 00:29:14.593 "method": "bdev_nvme_attach_controller" 00:29:14.593 } 00:29:14.593 EOF 00:29:14.593 )") 00:29:14.593 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:14.594 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:29:14.594 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:14.594 18:36:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:14.594 "params": { 00:29:14.594 "name": "Nvme1", 00:29:14.594 "trtype": "tcp", 00:29:14.594 "traddr": "10.0.0.2", 00:29:14.594 "adrfam": "ipv4", 00:29:14.594 "trsvcid": "4420", 00:29:14.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:14.594 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:14.594 "hdgst": false, 00:29:14.594 "ddgst": false 00:29:14.594 }, 00:29:14.594 "method": "bdev_nvme_attach_controller" 00:29:14.594 },{ 00:29:14.594 "params": { 00:29:14.594 "name": "Nvme2", 00:29:14.594 "trtype": "tcp", 00:29:14.594 "traddr": "10.0.0.2", 00:29:14.594 "adrfam": "ipv4", 00:29:14.594 "trsvcid": "4420", 00:29:14.594 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:14.594 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:14.594 "hdgst": false, 00:29:14.594 "ddgst": false 00:29:14.594 }, 
00:29:14.594 "method": "bdev_nvme_attach_controller" 00:29:14.594 },{ 00:29:14.594 "params": { 00:29:14.594 "name": "Nvme3", 00:29:14.594 "trtype": "tcp", 00:29:14.594 "traddr": "10.0.0.2", 00:29:14.594 "adrfam": "ipv4", 00:29:14.594 "trsvcid": "4420", 00:29:14.594 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:14.594 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:14.594 "hdgst": false, 00:29:14.594 "ddgst": false 00:29:14.594 }, 00:29:14.594 "method": "bdev_nvme_attach_controller" 00:29:14.594 },{ 00:29:14.594 "params": { 00:29:14.594 "name": "Nvme4", 00:29:14.594 "trtype": "tcp", 00:29:14.594 "traddr": "10.0.0.2", 00:29:14.594 "adrfam": "ipv4", 00:29:14.594 "trsvcid": "4420", 00:29:14.594 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:14.594 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:14.594 "hdgst": false, 00:29:14.594 "ddgst": false 00:29:14.594 }, 00:29:14.594 "method": "bdev_nvme_attach_controller" 00:29:14.594 },{ 00:29:14.594 "params": { 00:29:14.594 "name": "Nvme5", 00:29:14.594 "trtype": "tcp", 00:29:14.594 "traddr": "10.0.0.2", 00:29:14.594 "adrfam": "ipv4", 00:29:14.594 "trsvcid": "4420", 00:29:14.594 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:14.594 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:14.594 "hdgst": false, 00:29:14.594 "ddgst": false 00:29:14.594 }, 00:29:14.594 "method": "bdev_nvme_attach_controller" 00:29:14.594 },{ 00:29:14.594 "params": { 00:29:14.594 "name": "Nvme6", 00:29:14.594 "trtype": "tcp", 00:29:14.594 "traddr": "10.0.0.2", 00:29:14.594 "adrfam": "ipv4", 00:29:14.594 "trsvcid": "4420", 00:29:14.594 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:14.594 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:14.594 "hdgst": false, 00:29:14.594 "ddgst": false 00:29:14.594 }, 00:29:14.594 "method": "bdev_nvme_attach_controller" 00:29:14.594 },{ 00:29:14.594 "params": { 00:29:14.594 "name": "Nvme7", 00:29:14.594 "trtype": "tcp", 00:29:14.594 "traddr": "10.0.0.2", 00:29:14.594 "adrfam": "ipv4", 00:29:14.594 "trsvcid": "4420", 00:29:14.594 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:14.594 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:14.594 "hdgst": false, 00:29:14.594 "ddgst": false 00:29:14.594 }, 00:29:14.594 "method": "bdev_nvme_attach_controller" 00:29:14.594 },{ 00:29:14.594 "params": { 00:29:14.594 "name": "Nvme8", 00:29:14.594 "trtype": "tcp", 00:29:14.594 "traddr": "10.0.0.2", 00:29:14.594 "adrfam": "ipv4", 00:29:14.594 "trsvcid": "4420", 00:29:14.594 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:14.594 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:14.594 "hdgst": false, 00:29:14.594 "ddgst": false 00:29:14.594 }, 00:29:14.594 "method": "bdev_nvme_attach_controller" 00:29:14.594 },{ 00:29:14.594 "params": { 00:29:14.594 "name": "Nvme9", 00:29:14.594 "trtype": "tcp", 00:29:14.594 "traddr": "10.0.0.2", 00:29:14.594 "adrfam": "ipv4", 00:29:14.594 "trsvcid": "4420", 00:29:14.594 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:14.594 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:14.594 "hdgst": false, 00:29:14.594 "ddgst": false 00:29:14.594 }, 00:29:14.594 "method": "bdev_nvme_attach_controller" 00:29:14.594 },{ 00:29:14.594 "params": { 00:29:14.594 "name": "Nvme10", 00:29:14.594 "trtype": "tcp", 00:29:14.594 "traddr": "10.0.0.2", 00:29:14.594 "adrfam": "ipv4", 00:29:14.594 "trsvcid": "4420", 00:29:14.594 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:14.594 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:14.594 "hdgst": false, 00:29:14.594 "ddgst": false 00:29:14.594 }, 00:29:14.594 "method": "bdev_nvme_attach_controller" 00:29:14.594 }' 00:29:14.594 [2024-11-18 18:36:12.723825] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:29:14.594 [2024-11-18 18:36:12.723992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3052702 ] 00:29:14.594 [2024-11-18 18:36:12.866210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.853 [2024-11-18 18:36:12.996395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.753 Running I/O for 1 seconds... 00:29:17.947 1425.00 IOPS, 89.06 MiB/s 00:29:17.947 Latency(us) 00:29:17.947 [2024-11-18T17:36:16.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.947 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:17.947 Verification LBA range: start 0x0 length 0x400 00:29:17.947 Nvme1n1 : 1.13 185.21 11.58 0.00 0.00 319239.87 23981.32 292047.83 00:29:17.947 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:17.947 Verification LBA range: start 0x0 length 0x400 00:29:17.947 Nvme2n1 : 1.21 212.22 13.26 0.00 0.00 292450.23 36700.16 284280.60 00:29:17.947 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:17.947 Verification LBA range: start 0x0 length 0x400 00:29:17.947 Nvme3n1 : 1.20 212.57 13.29 0.00 0.00 288105.24 21748.24 301368.51 00:29:17.947 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:17.947 Verification LBA range: start 0x0 length 0x400 00:29:17.947 Nvme4n1 : 1.22 210.54 13.16 0.00 0.00 286087.59 22622.06 302921.96 00:29:17.947 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:17.947 Verification LBA range: start 0x0 length 0x400 00:29:17.947 Nvme5n1 : 1.15 166.56 10.41 0.00 0.00 353761.15 24272.59 307582.29 00:29:17.947 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:17.947 Verification LBA range: start 0x0 
length 0x400 00:29:17.947 Nvme6n1 : 1.15 171.89 10.74 0.00 0.00 334116.33 2961.26 306028.85 00:29:17.947 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:17.947 Verification LBA range: start 0x0 length 0x400 00:29:17.947 Nvme7n1 : 1.22 209.53 13.10 0.00 0.00 272662.38 23690.05 301368.51 00:29:17.947 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:17.947 Verification LBA range: start 0x0 length 0x400 00:29:17.947 Nvme8n1 : 1.23 208.72 13.04 0.00 0.00 267635.29 16699.54 306028.85 00:29:17.947 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:17.947 Verification LBA range: start 0x0 length 0x400 00:29:17.947 Nvme9n1 : 1.19 160.99 10.06 0.00 0.00 340960.65 24466.77 343311.55 00:29:17.947 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:17.947 Verification LBA range: start 0x0 length 0x400 00:29:17.947 Nvme10n1 : 1.24 207.26 12.95 0.00 0.00 261434.97 22039.51 312242.63 00:29:17.947 [2024-11-18T17:36:16.284Z] =================================================================================================================== 00:29:17.947 [2024-11-18T17:36:16.284Z] Total : 1945.48 121.59 0.00 0.00 297950.48 2961.26 343311.55 00:29:18.883 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:18.883 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:18.883 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:18.883 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:18.883 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- target/shutdown.sh@46 -- # nvmftestfini 00:29:18.883 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:18.883 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:18.883 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:18.883 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:18.883 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:18.883 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:18.883 rmmod nvme_tcp 00:29:18.883 rmmod nvme_fabrics 00:29:18.883 rmmod nvme_keyring 00:29:18.883 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:18.883 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:18.883 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:18.883 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3051618 ']' 00:29:18.884 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3051618 00:29:18.884 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3051618 ']' 00:29:18.884 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3051618 00:29:18.884 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:29:18.884 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:18.884 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3051618 00:29:18.884 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:18.884 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:18.884 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3051618' 00:29:18.884 killing process with pid 3051618 00:29:18.884 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3051618 00:29:18.884 18:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3051618 00:29:22.166 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:22.166 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:22.166 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:22.166 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:22.166 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:29:22.166 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:22.166 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:29:22.166 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:29:22.166 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:22.166 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.166 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:22.166 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:24.067 00:29:24.067 real 0m16.770s 00:29:24.067 user 0m52.861s 00:29:24.067 sys 0m3.900s 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:24.067 ************************************ 00:29:24.067 END TEST nvmf_shutdown_tc1 00:29:24.067 ************************************ 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:24.067 ************************************ 00:29:24.067 START TEST nvmf_shutdown_tc2 00:29:24.067 ************************************ 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:29:24.067 18:36:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.067 18:36:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:24.067 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:24.067 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.067 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:24.068 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.068 18:36:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:24.068 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.068 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:24.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:29:24.068 00:29:24.068 --- 10.0.0.2 ping statistics --- 00:29:24.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.068 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:24.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:29:24.068 00:29:24.068 --- 10.0.0.1 ping statistics --- 00:29:24.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.068 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:24.068 
18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3053864 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3053864 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3053864 ']' 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.068 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:24.068 [2024-11-18 18:36:22.226388] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:29:24.068 [2024-11-18 18:36:22.226529] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.068 [2024-11-18 18:36:22.377952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:24.327 [2024-11-18 18:36:22.503456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.327 [2024-11-18 18:36:22.503542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.327 [2024-11-18 18:36:22.503564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.327 [2024-11-18 18:36:22.503585] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.327 [2024-11-18 18:36:22.503627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:24.327 [2024-11-18 18:36:22.506435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.327 [2024-11-18 18:36:22.506497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:24.327 [2024-11-18 18:36:22.506542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.327 [2024-11-18 18:36:22.506549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:24.892 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:24.893 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:24.893 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:24.893 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:24.893 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:24.893 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.893 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:24.893 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.893 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:24.893 [2024-11-18 18:36:23.212985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.151 18:36:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.151 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.151 Malloc1 00:29:25.151 [2024-11-18 18:36:23.367661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.151 Malloc2 00:29:25.409 Malloc3 00:29:25.409 Malloc4 00:29:25.409 Malloc5 00:29:25.667 Malloc6 00:29:25.667 Malloc7 00:29:25.926 Malloc8 00:29:25.926 Malloc9 
00:29:25.926 Malloc10 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3054176 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3054176 /var/tmp/bdevperf.sock 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3054176 ']' 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:29:26.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.184 { 00:29:26.184 "params": { 00:29:26.184 "name": "Nvme$subsystem", 00:29:26.184 "trtype": "$TEST_TRANSPORT", 00:29:26.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.184 "adrfam": "ipv4", 00:29:26.184 "trsvcid": "$NVMF_PORT", 00:29:26.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.184 "hdgst": ${hdgst:-false}, 00:29:26.184 "ddgst": ${ddgst:-false} 00:29:26.184 }, 00:29:26.184 "method": "bdev_nvme_attach_controller" 00:29:26.184 } 00:29:26.184 EOF 00:29:26.184 )") 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.184 { 00:29:26.184 "params": { 00:29:26.184 "name": "Nvme$subsystem", 00:29:26.184 "trtype": "$TEST_TRANSPORT", 00:29:26.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.184 "adrfam": "ipv4", 00:29:26.184 "trsvcid": "$NVMF_PORT", 00:29:26.184 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.184 "hdgst": ${hdgst:-false}, 00:29:26.184 "ddgst": ${ddgst:-false} 00:29:26.184 }, 00:29:26.184 "method": "bdev_nvme_attach_controller" 00:29:26.184 } 00:29:26.184 EOF 00:29:26.184 )") 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.184 { 00:29:26.184 "params": { 00:29:26.184 "name": "Nvme$subsystem", 00:29:26.184 "trtype": "$TEST_TRANSPORT", 00:29:26.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.184 "adrfam": "ipv4", 00:29:26.184 "trsvcid": "$NVMF_PORT", 00:29:26.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.184 "hdgst": ${hdgst:-false}, 00:29:26.184 "ddgst": ${ddgst:-false} 00:29:26.184 }, 00:29:26.184 "method": "bdev_nvme_attach_controller" 00:29:26.184 } 00:29:26.184 EOF 00:29:26.184 )") 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.184 { 00:29:26.184 "params": { 00:29:26.184 "name": "Nvme$subsystem", 00:29:26.184 "trtype": "$TEST_TRANSPORT", 00:29:26.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.184 "adrfam": "ipv4", 00:29:26.184 "trsvcid": "$NVMF_PORT", 00:29:26.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.184 "hdgst": 
${hdgst:-false}, 00:29:26.184 "ddgst": ${ddgst:-false} 00:29:26.184 }, 00:29:26.184 "method": "bdev_nvme_attach_controller" 00:29:26.184 } 00:29:26.184 EOF 00:29:26.184 )") 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:26.184 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.185 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.185 { 00:29:26.185 "params": { 00:29:26.185 "name": "Nvme$subsystem", 00:29:26.185 "trtype": "$TEST_TRANSPORT", 00:29:26.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.185 "adrfam": "ipv4", 00:29:26.185 "trsvcid": "$NVMF_PORT", 00:29:26.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.185 "hdgst": ${hdgst:-false}, 00:29:26.185 "ddgst": ${ddgst:-false} 00:29:26.185 }, 00:29:26.185 "method": "bdev_nvme_attach_controller" 00:29:26.185 } 00:29:26.185 EOF 00:29:26.185 )") 00:29:26.185 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:26.185 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.185 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.185 { 00:29:26.185 "params": { 00:29:26.185 "name": "Nvme$subsystem", 00:29:26.185 "trtype": "$TEST_TRANSPORT", 00:29:26.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.185 "adrfam": "ipv4", 00:29:26.185 "trsvcid": "$NVMF_PORT", 00:29:26.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.185 "hdgst": ${hdgst:-false}, 00:29:26.185 "ddgst": ${ddgst:-false} 00:29:26.185 }, 00:29:26.185 "method": "bdev_nvme_attach_controller" 
00:29:26.185 } 00:29:26.185 EOF 00:29:26.185 )") 00:29:26.185 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:26.185 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.185 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.185 { 00:29:26.185 "params": { 00:29:26.185 "name": "Nvme$subsystem", 00:29:26.185 "trtype": "$TEST_TRANSPORT", 00:29:26.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.185 "adrfam": "ipv4", 00:29:26.185 "trsvcid": "$NVMF_PORT", 00:29:26.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.185 "hdgst": ${hdgst:-false}, 00:29:26.185 "ddgst": ${ddgst:-false} 00:29:26.185 }, 00:29:26.185 "method": "bdev_nvme_attach_controller" 00:29:26.185 } 00:29:26.185 EOF 00:29:26.185 )") 00:29:26.185 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:26.185 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.185 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.185 { 00:29:26.185 "params": { 00:29:26.185 "name": "Nvme$subsystem", 00:29:26.185 "trtype": "$TEST_TRANSPORT", 00:29:26.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.185 "adrfam": "ipv4", 00:29:26.185 "trsvcid": "$NVMF_PORT", 00:29:26.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.185 "hdgst": ${hdgst:-false}, 00:29:26.185 "ddgst": ${ddgst:-false} 00:29:26.185 }, 00:29:26.185 "method": "bdev_nvme_attach_controller" 00:29:26.185 } 00:29:26.185 EOF 00:29:26.185 )") 00:29:26.185 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@582 -- # cat 00:29:26.185 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.185 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.185 { 00:29:26.185 "params": { 00:29:26.185 "name": "Nvme$subsystem", 00:29:26.185 "trtype": "$TEST_TRANSPORT", 00:29:26.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.185 "adrfam": "ipv4", 00:29:26.185 "trsvcid": "$NVMF_PORT", 00:29:26.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.185 "hdgst": ${hdgst:-false}, 00:29:26.185 "ddgst": ${ddgst:-false} 00:29:26.185 }, 00:29:26.185 "method": "bdev_nvme_attach_controller" 00:29:26.185 } 00:29:26.185 EOF 00:29:26.185 )") 00:29:26.185 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:26.185 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.185 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.185 { 00:29:26.185 "params": { 00:29:26.185 "name": "Nvme$subsystem", 00:29:26.185 "trtype": "$TEST_TRANSPORT", 00:29:26.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.185 "adrfam": "ipv4", 00:29:26.185 "trsvcid": "$NVMF_PORT", 00:29:26.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.185 "hdgst": ${hdgst:-false}, 00:29:26.185 "ddgst": ${ddgst:-false} 00:29:26.185 }, 00:29:26.185 "method": "bdev_nvme_attach_controller" 00:29:26.185 } 00:29:26.185 EOF 00:29:26.185 )") 00:29:26.185 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:26.185 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@584 -- # jq . 00:29:26.185 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:29:26.185 18:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:26.185 "params": { 00:29:26.185 "name": "Nvme1", 00:29:26.185 "trtype": "tcp", 00:29:26.185 "traddr": "10.0.0.2", 00:29:26.185 "adrfam": "ipv4", 00:29:26.185 "trsvcid": "4420", 00:29:26.185 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:26.185 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:26.185 "hdgst": false, 00:29:26.185 "ddgst": false 00:29:26.185 }, 00:29:26.185 "method": "bdev_nvme_attach_controller" 00:29:26.185 },{ 00:29:26.185 "params": { 00:29:26.185 "name": "Nvme2", 00:29:26.185 "trtype": "tcp", 00:29:26.185 "traddr": "10.0.0.2", 00:29:26.185 "adrfam": "ipv4", 00:29:26.185 "trsvcid": "4420", 00:29:26.185 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:26.185 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:26.185 "hdgst": false, 00:29:26.185 "ddgst": false 00:29:26.185 }, 00:29:26.185 "method": "bdev_nvme_attach_controller" 00:29:26.185 },{ 00:29:26.185 "params": { 00:29:26.185 "name": "Nvme3", 00:29:26.185 "trtype": "tcp", 00:29:26.185 "traddr": "10.0.0.2", 00:29:26.185 "adrfam": "ipv4", 00:29:26.185 "trsvcid": "4420", 00:29:26.185 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:26.185 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:26.185 "hdgst": false, 00:29:26.185 "ddgst": false 00:29:26.185 }, 00:29:26.185 "method": "bdev_nvme_attach_controller" 00:29:26.185 },{ 00:29:26.185 "params": { 00:29:26.185 "name": "Nvme4", 00:29:26.185 "trtype": "tcp", 00:29:26.185 "traddr": "10.0.0.2", 00:29:26.185 "adrfam": "ipv4", 00:29:26.185 "trsvcid": "4420", 00:29:26.185 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:26.185 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:26.185 "hdgst": false, 00:29:26.185 "ddgst": false 00:29:26.185 }, 00:29:26.185 "method": "bdev_nvme_attach_controller" 00:29:26.185 },{ 
00:29:26.185 "params": { 00:29:26.185 "name": "Nvme5", 00:29:26.185 "trtype": "tcp", 00:29:26.185 "traddr": "10.0.0.2", 00:29:26.185 "adrfam": "ipv4", 00:29:26.185 "trsvcid": "4420", 00:29:26.185 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:26.185 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:26.185 "hdgst": false, 00:29:26.185 "ddgst": false 00:29:26.185 }, 00:29:26.185 "method": "bdev_nvme_attach_controller" 00:29:26.185 },{ 00:29:26.185 "params": { 00:29:26.185 "name": "Nvme6", 00:29:26.185 "trtype": "tcp", 00:29:26.185 "traddr": "10.0.0.2", 00:29:26.185 "adrfam": "ipv4", 00:29:26.185 "trsvcid": "4420", 00:29:26.185 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:26.185 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:26.186 "hdgst": false, 00:29:26.186 "ddgst": false 00:29:26.186 }, 00:29:26.186 "method": "bdev_nvme_attach_controller" 00:29:26.186 },{ 00:29:26.186 "params": { 00:29:26.186 "name": "Nvme7", 00:29:26.186 "trtype": "tcp", 00:29:26.186 "traddr": "10.0.0.2", 00:29:26.186 "adrfam": "ipv4", 00:29:26.186 "trsvcid": "4420", 00:29:26.186 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:26.186 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:26.186 "hdgst": false, 00:29:26.186 "ddgst": false 00:29:26.186 }, 00:29:26.186 "method": "bdev_nvme_attach_controller" 00:29:26.186 },{ 00:29:26.186 "params": { 00:29:26.186 "name": "Nvme8", 00:29:26.186 "trtype": "tcp", 00:29:26.186 "traddr": "10.0.0.2", 00:29:26.186 "adrfam": "ipv4", 00:29:26.186 "trsvcid": "4420", 00:29:26.186 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:26.186 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:26.186 "hdgst": false, 00:29:26.186 "ddgst": false 00:29:26.186 }, 00:29:26.186 "method": "bdev_nvme_attach_controller" 00:29:26.186 },{ 00:29:26.186 "params": { 00:29:26.186 "name": "Nvme9", 00:29:26.186 "trtype": "tcp", 00:29:26.186 "traddr": "10.0.0.2", 00:29:26.186 "adrfam": "ipv4", 00:29:26.186 "trsvcid": "4420", 00:29:26.186 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:26.186 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:29:26.186 "hdgst": false, 00:29:26.186 "ddgst": false 00:29:26.186 }, 00:29:26.186 "method": "bdev_nvme_attach_controller" 00:29:26.186 },{ 00:29:26.186 "params": { 00:29:26.186 "name": "Nvme10", 00:29:26.186 "trtype": "tcp", 00:29:26.186 "traddr": "10.0.0.2", 00:29:26.186 "adrfam": "ipv4", 00:29:26.186 "trsvcid": "4420", 00:29:26.186 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:26.186 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:26.186 "hdgst": false, 00:29:26.186 "ddgst": false 00:29:26.186 }, 00:29:26.186 "method": "bdev_nvme_attach_controller" 00:29:26.186 }' 00:29:26.186 [2024-11-18 18:36:24.376051] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:29:26.186 [2024-11-18 18:36:24.376197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3054176 ] 00:29:26.186 [2024-11-18 18:36:24.517063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.444 [2024-11-18 18:36:24.646884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.971 Running I/O for 10 seconds... 
00:29:28.971 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.971 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:28.971 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:28.971 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.971 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.971 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.971 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:28.972 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:28.972 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:28.972 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:28.972 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:28.972 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:28.972 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:28.972 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:28.972 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:28.972 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.972 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.972 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.972 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:29:28.972 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:29:28.972 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:29.229 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:29.229 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:29.229 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:29.229 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.229 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:29.229 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.229 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.229 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:29.229 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:29.229 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:29.487 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:29.487 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:29.487 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:29.487 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:29.487 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.487 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.487 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.487 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:29.487 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:29.487 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:29.745 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:29.745 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:29.745 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3054176 00:29:29.745 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3054176 
']' 00:29:29.745 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3054176 00:29:29.745 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:29.745 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:29.745 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3054176 00:29:29.745 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:29.745 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:29.745 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3054176' 00:29:29.745 killing process with pid 3054176 00:29:29.745 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3054176 00:29:29.745 18:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3054176 00:29:29.745 1669.00 IOPS, 104.31 MiB/s [2024-11-18T17:36:28.082Z] Received shutdown signal, test time was about 1.049002 seconds 00:29:29.745 00:29:29.745 Latency(us) 00:29:29.745 [2024-11-18T17:36:28.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.745 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.745 Verification LBA range: start 0x0 length 0x400 00:29:29.745 Nvme1n1 : 1.00 192.53 12.03 0.00 0.00 328029.42 20971.52 307582.29 00:29:29.745 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.745 Verification LBA range: start 0x0 length 0x400 00:29:29.745 Nvme2n1 : 0.99 198.66 
12.42 0.00 0.00 307139.97 15243.19 309135.74 00:29:29.745 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.745 Verification LBA range: start 0x0 length 0x400 00:29:29.745 Nvme3n1 : 0.96 199.33 12.46 0.00 0.00 303792.73 40583.77 293601.28 00:29:29.745 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.745 Verification LBA range: start 0x0 length 0x400 00:29:29.745 Nvme4n1 : 0.97 198.60 12.41 0.00 0.00 298411.30 19223.89 312242.63 00:29:29.745 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.745 Verification LBA range: start 0x0 length 0x400 00:29:29.745 Nvme5n1 : 1.05 183.18 11.45 0.00 0.00 318732.77 24175.50 361952.90 00:29:29.745 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.745 Verification LBA range: start 0x0 length 0x400 00:29:29.745 Nvme6n1 : 1.04 184.64 11.54 0.00 0.00 309636.87 30098.01 338651.21 00:29:29.745 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.745 Verification LBA range: start 0x0 length 0x400 00:29:29.745 Nvme7n1 : 0.98 195.52 12.22 0.00 0.00 283871.76 23495.87 282727.16 00:29:29.745 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.745 Verification LBA range: start 0x0 length 0x400 00:29:29.745 Nvme8n1 : 0.99 194.82 12.18 0.00 0.00 278468.33 32039.82 299815.06 00:29:29.745 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.745 Verification LBA range: start 0x0 length 0x400 00:29:29.745 Nvme9n1 : 1.03 186.18 11.64 0.00 0.00 287222.64 24466.77 307582.29 00:29:29.745 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.745 Verification LBA range: start 0x0 length 0x400 00:29:29.745 Nvme10n1 : 1.03 187.05 11.69 0.00 0.00 279051.38 43302.31 288940.94 00:29:29.745 [2024-11-18T17:36:28.082Z] 
=================================================================================================================== 00:29:29.745 [2024-11-18T17:36:28.082Z] Total : 1920.50 120.03 0.00 0.00 299455.73 15243.19 361952.90 00:29:30.679 18:36:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:32.052 18:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3053864 00:29:32.052 18:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:32.052 18:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:32.052 18:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:32.052 18:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:32.052 18:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:32.052 18:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:32.052 18:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:32.052 18:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:32.052 18:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:32.052 18:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:32.052 18:36:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:32.052 
rmmod nvme_tcp 00:29:32.052 rmmod nvme_fabrics 00:29:32.052 rmmod nvme_keyring 00:29:32.052 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:32.052 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:32.052 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:32.052 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3053864 ']' 00:29:32.052 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3053864 00:29:32.052 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3053864 ']' 00:29:32.052 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3053864 00:29:32.052 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:32.052 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.052 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3053864 00:29:32.052 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:32.052 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:32.052 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3053864' 00:29:32.052 killing process with pid 3053864 00:29:32.052 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 
-- # kill 3053864 00:29:32.052 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3053864 00:29:34.581 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:34.581 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:34.581 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:34.581 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:34.581 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:34.581 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:34.581 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:34.581 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:34.581 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:34.581 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.581 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.581 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:37.112 00:29:37.112 real 0m12.894s 00:29:37.112 user 0m44.287s 00:29:37.112 sys 0m2.022s 00:29:37.112 18:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.112 ************************************ 00:29:37.112 END TEST nvmf_shutdown_tc2 00:29:37.112 ************************************ 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:37.112 ************************************ 00:29:37.112 START TEST nvmf_shutdown_tc3 00:29:37.112 ************************************ 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 
-- # net_devs=() 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:37.112 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:37.112 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:37.112 18:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.112 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:37.113 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:37.113 18:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:37.113 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:37.113 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:37.113 18:36:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:37.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:37.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:29:37.113 00:29:37.113 --- 10.0.0.2 ping statistics --- 00:29:37.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.113 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:37.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:37.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:29:37.113 00:29:37.113 --- 10.0.0.1 ping statistics --- 00:29:37.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.113 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:37.113 
18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3055603 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3055603 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3055603 ']' 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:37.113 18:36:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:37.113 [2024-11-18 18:36:35.147232] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:29:37.113 [2024-11-18 18:36:35.147366] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.113 [2024-11-18 18:36:35.295734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:37.113 [2024-11-18 18:36:35.438695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.113 [2024-11-18 18:36:35.438765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.113 [2024-11-18 18:36:35.438787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:37.113 [2024-11-18 18:36:35.438807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:37.113 [2024-11-18 18:36:35.438824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:37.113 [2024-11-18 18:36:35.441746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:37.113 [2024-11-18 18:36:35.441803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:37.113 [2024-11-18 18:36:35.441853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.113 [2024-11-18 18:36:35.441859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:38.048 [2024-11-18 18:36:36.180315] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.048 18:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.048 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:38.048 Malloc1 00:29:38.048 [2024-11-18 18:36:36.318197] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.306 Malloc2 00:29:38.306 Malloc3 00:29:38.306 Malloc4 00:29:38.586 Malloc5 00:29:38.586 Malloc6 00:29:38.586 Malloc7 00:29:38.843 Malloc8 00:29:38.843 Malloc9 
00:29:39.102 Malloc10 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3055915 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3055915 /var/tmp/bdevperf.sock 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3055915 ']' 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:29:39.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:39.102 { 00:29:39.102 "params": { 00:29:39.102 "name": "Nvme$subsystem", 00:29:39.102 "trtype": "$TEST_TRANSPORT", 00:29:39.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.102 "adrfam": "ipv4", 00:29:39.102 "trsvcid": "$NVMF_PORT", 00:29:39.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.102 "hdgst": ${hdgst:-false}, 00:29:39.102 "ddgst": ${ddgst:-false} 00:29:39.102 }, 00:29:39.102 "method": "bdev_nvme_attach_controller" 00:29:39.102 } 00:29:39.102 EOF 00:29:39.102 )") 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:39.102 { 00:29:39.102 "params": { 00:29:39.102 "name": "Nvme$subsystem", 00:29:39.102 "trtype": "$TEST_TRANSPORT", 00:29:39.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.102 "adrfam": "ipv4", 00:29:39.102 "trsvcid": "$NVMF_PORT", 00:29:39.102 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.102 "hdgst": ${hdgst:-false}, 00:29:39.102 "ddgst": ${ddgst:-false} 00:29:39.102 }, 00:29:39.102 "method": "bdev_nvme_attach_controller" 00:29:39.102 } 00:29:39.102 EOF 00:29:39.102 )") 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:39.102 { 00:29:39.102 "params": { 00:29:39.102 "name": "Nvme$subsystem", 00:29:39.102 "trtype": "$TEST_TRANSPORT", 00:29:39.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.102 "adrfam": "ipv4", 00:29:39.102 "trsvcid": "$NVMF_PORT", 00:29:39.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.102 "hdgst": ${hdgst:-false}, 00:29:39.102 "ddgst": ${ddgst:-false} 00:29:39.102 }, 00:29:39.102 "method": "bdev_nvme_attach_controller" 00:29:39.102 } 00:29:39.102 EOF 00:29:39.102 )") 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:39.102 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:39.102 { 00:29:39.102 "params": { 00:29:39.102 "name": "Nvme$subsystem", 00:29:39.102 "trtype": "$TEST_TRANSPORT", 00:29:39.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.102 "adrfam": "ipv4", 00:29:39.102 "trsvcid": "$NVMF_PORT", 00:29:39.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.102 "hdgst": 
${hdgst:-false}, 00:29:39.102 "ddgst": ${ddgst:-false} 00:29:39.102 }, 00:29:39.102 "method": "bdev_nvme_attach_controller" 00:29:39.102 } 00:29:39.103 EOF 00:29:39.103 )") 00:29:39.103 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:39.103 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:39.103 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:39.103 { 00:29:39.103 "params": { 00:29:39.103 "name": "Nvme$subsystem", 00:29:39.103 "trtype": "$TEST_TRANSPORT", 00:29:39.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.103 "adrfam": "ipv4", 00:29:39.103 "trsvcid": "$NVMF_PORT", 00:29:39.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.103 "hdgst": ${hdgst:-false}, 00:29:39.103 "ddgst": ${ddgst:-false} 00:29:39.103 }, 00:29:39.103 "method": "bdev_nvme_attach_controller" 00:29:39.103 } 00:29:39.103 EOF 00:29:39.103 )") 00:29:39.103 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:39.103 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:39.103 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:39.103 { 00:29:39.103 "params": { 00:29:39.103 "name": "Nvme$subsystem", 00:29:39.103 "trtype": "$TEST_TRANSPORT", 00:29:39.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.103 "adrfam": "ipv4", 00:29:39.103 "trsvcid": "$NVMF_PORT", 00:29:39.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.103 "hdgst": ${hdgst:-false}, 00:29:39.103 "ddgst": ${ddgst:-false} 00:29:39.103 }, 00:29:39.103 "method": "bdev_nvme_attach_controller" 
00:29:39.103 } 00:29:39.103 EOF 00:29:39.103 )") 00:29:39.103 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:39.103 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:39.103 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:39.103 { 00:29:39.103 "params": { 00:29:39.103 "name": "Nvme$subsystem", 00:29:39.103 "trtype": "$TEST_TRANSPORT", 00:29:39.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.103 "adrfam": "ipv4", 00:29:39.103 "trsvcid": "$NVMF_PORT", 00:29:39.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.103 "hdgst": ${hdgst:-false}, 00:29:39.103 "ddgst": ${ddgst:-false} 00:29:39.103 }, 00:29:39.103 "method": "bdev_nvme_attach_controller" 00:29:39.103 } 00:29:39.103 EOF 00:29:39.103 )") 00:29:39.103 [identical cat / for subsystem / config+=(heredoc) trace repeated once per remaining subsystem] 00:29:39.103 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:39.103 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@584 -- # jq . 00:29:39.103 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:39.103 18:36:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:39.103 "params": { 00:29:39.103 "name": "Nvme1", 00:29:39.103 "trtype": "tcp", 00:29:39.103 "traddr": "10.0.0.2", 00:29:39.103 "adrfam": "ipv4", 00:29:39.103 "trsvcid": "4420", 00:29:39.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:39.103 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:39.103 "hdgst": false, 00:29:39.103 "ddgst": false 00:29:39.103 }, 00:29:39.103 "method": "bdev_nvme_attach_controller" 00:29:39.103 },{ 00:29:39.103 "params": { 00:29:39.103 "name": "Nvme2", 00:29:39.103 "trtype": "tcp", 00:29:39.103 "traddr": "10.0.0.2", 00:29:39.103 "adrfam": "ipv4", 00:29:39.103 "trsvcid": "4420", 00:29:39.103 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:39.103 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:39.103 "hdgst": false, 00:29:39.103 "ddgst": false 00:29:39.103 }, 00:29:39.103 "method": "bdev_nvme_attach_controller" 00:29:39.103 },{ 00:29:39.103 "params": { 00:29:39.103 "name": "Nvme3", 00:29:39.103 "trtype": "tcp", 00:29:39.103 "traddr": "10.0.0.2", 00:29:39.103 "adrfam": "ipv4", 00:29:39.103 "trsvcid": "4420", 00:29:39.103 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:39.103 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:39.103 "hdgst": false, 00:29:39.103 "ddgst": false 00:29:39.103 }, 00:29:39.103 "method": "bdev_nvme_attach_controller" 00:29:39.103 },{ 00:29:39.103 "params": { 00:29:39.103 "name": "Nvme4", 00:29:39.103 "trtype": "tcp", 00:29:39.103 "traddr": "10.0.0.2", 00:29:39.103 "adrfam": "ipv4", 00:29:39.103 "trsvcid": "4420", 00:29:39.103 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:39.103 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:39.103 "hdgst": false, 00:29:39.103 "ddgst": false 00:29:39.103 }, 00:29:39.103 "method": "bdev_nvme_attach_controller" 00:29:39.103 },{ 
00:29:39.103 "params": { 00:29:39.103 "name": "Nvme5", 00:29:39.103 "trtype": "tcp", 00:29:39.103 "traddr": "10.0.0.2", 00:29:39.103 "adrfam": "ipv4", 00:29:39.103 "trsvcid": "4420", 00:29:39.103 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:39.103 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:39.103 "hdgst": false, 00:29:39.103 "ddgst": false 00:29:39.103 }, 00:29:39.103 "method": "bdev_nvme_attach_controller" 00:29:39.103 },{ 00:29:39.103 "params": { 00:29:39.103 "name": "Nvme6", 00:29:39.103 "trtype": "tcp", 00:29:39.103 "traddr": "10.0.0.2", 00:29:39.103 "adrfam": "ipv4", 00:29:39.103 "trsvcid": "4420", 00:29:39.103 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:39.103 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:39.103 "hdgst": false, 00:29:39.103 "ddgst": false 00:29:39.103 }, 00:29:39.103 "method": "bdev_nvme_attach_controller" 00:29:39.103 },{ 00:29:39.103 "params": { 00:29:39.103 "name": "Nvme7", 00:29:39.103 "trtype": "tcp", 00:29:39.103 "traddr": "10.0.0.2", 00:29:39.103 "adrfam": "ipv4", 00:29:39.103 "trsvcid": "4420", 00:29:39.103 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:39.103 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:39.103 "hdgst": false, 00:29:39.103 "ddgst": false 00:29:39.103 }, 00:29:39.103 "method": "bdev_nvme_attach_controller" 00:29:39.103 },{ 00:29:39.103 "params": { 00:29:39.103 "name": "Nvme8", 00:29:39.103 "trtype": "tcp", 00:29:39.103 "traddr": "10.0.0.2", 00:29:39.103 "adrfam": "ipv4", 00:29:39.103 "trsvcid": "4420", 00:29:39.103 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:39.103 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:39.104 "hdgst": false, 00:29:39.104 "ddgst": false 00:29:39.104 }, 00:29:39.104 "method": "bdev_nvme_attach_controller" 00:29:39.104 },{ 00:29:39.104 "params": { 00:29:39.104 "name": "Nvme9", 00:29:39.104 "trtype": "tcp", 00:29:39.104 "traddr": "10.0.0.2", 00:29:39.104 "adrfam": "ipv4", 00:29:39.104 "trsvcid": "4420", 00:29:39.104 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:39.104 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:29:39.104 "hdgst": false, 00:29:39.104 "ddgst": false 00:29:39.104 }, 00:29:39.104 "method": "bdev_nvme_attach_controller" 00:29:39.104 },{ 00:29:39.104 "params": { 00:29:39.104 "name": "Nvme10", 00:29:39.104 "trtype": "tcp", 00:29:39.104 "traddr": "10.0.0.2", 00:29:39.104 "adrfam": "ipv4", 00:29:39.104 "trsvcid": "4420", 00:29:39.104 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:39.104 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:39.104 "hdgst": false, 00:29:39.104 "ddgst": false 00:29:39.104 }, 00:29:39.104 "method": "bdev_nvme_attach_controller" 00:29:39.104 }' 00:29:39.104 [2024-11-18 18:36:37.347362] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:29:39.104 [2024-11-18 18:36:37.347507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3055915 ] 00:29:39.362 [2024-11-18 18:36:37.484271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.362 [2024-11-18 18:36:37.612529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.260 Running I/O for 10 seconds... 
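The trace above repeatedly expands the same heredoc template from nvmf/common.sh to build one bdev_nvme_attach_controller entry per subsystem, then comma-joins the fragments and runs the result through jq. A runnable minimal sketch of that pattern follows; the three subsystem numbers and the transport/address values are placeholders, not the values of the actual test run, and jq here only verifies that the joined fragments form valid JSON:

```shell
#!/usr/bin/env bash
# Sketch of the config-assembly pattern seen in the trace: one "params"
# fragment per subsystem is captured from a heredoc into an array, the
# fragments are comma-joined, and jq parses the combined document.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2 3; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Comma-join the fragments ("${config[*]}" uses the first char of IFS as
# separator) and confirm the combined document parses as JSON.
IFS=,
first_name=$(printf '[%s]' "${config[*]}" | jq -r '.[0].params.name')
echo "$first_name"
```

In the real test the joined document becomes the bdevperf configuration that produces the expanded Nvme1..Nvme10 entries printed in the trace.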
00:29:41.825 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:41.825 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:41.825 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:41.825 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.825 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:41.825 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.825 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:41.825 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:41.825 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:41.825 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:41.825 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:41.825 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:41.825 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:41.825 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:41.825 18:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:41.825 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:41.825 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.825 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:42.083 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.083 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:42.083 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:42.083 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3055603 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3055603 ']' 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3055603 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3055603 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3055603' 00:29:42.377 killing process with pid 3055603 00:29:42.377 18:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3055603 00:29:42.377 18:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3055603 00:29:42.377 [2024-11-18 18:36:40.518638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:42.377 [same message repeated for tqpair=0x618000007480 through 18:36:40.519951] 00:29:42.378 [2024-11-18 18:36:40.525233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:42.378 [same message repeated for tqpair=0x618000009880 through 18:36:40.526651] 00:29:42.379 [2024-11-18 18:36:40.529613] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:42.379 [2024-11-18 18:36:40.530445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.379 [2024-11-18
18:36:40.530488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.379 [2024-11-18 18:36:40.530530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.379 [2024-11-18 18:36:40.530554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.379 [2024-11-18 18:36:40.530577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.379 [2024-11-18 18:36:40.530617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.379 [2024-11-18 18:36:40.530643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.379 [2024-11-18 18:36:40.530665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.379 [2024-11-18 18:36:40.530687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:42.379 [2024-11-18 18:36:40.530830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.379 [2024-11-18 18:36:40.530873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.379 [2024-11-18 18:36:40.530910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.379 [2024-11-18 18:36:40.530940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.379 [2024-11-18 18:36:40.530974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.379 [2024-11-18 18:36:40.530997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.379 [2024-11-18 18:36:40.531020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.379 [2024-11-18 18:36:40.531042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.380 [2024-11-18 18:36:40.531063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.531995] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:42.380 [2024-11-18 18:36:40.537495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537877] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.537994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 
[2024-11-18 18:36:40.538125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the 
state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.538784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.544402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.544461] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.544483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.544501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.544520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.544538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.544555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.380 [2024-11-18 18:36:40.544573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.544624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.544654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.544674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.544692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.544710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 
[2024-11-18 18:36:40.544729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.544747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.544766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.544784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.544803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.544821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.544839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.544858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.544876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.544894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.544921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.544939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the 
state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.544957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.544975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.544993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545421] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.545582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.381 
[2024-11-18 18:36:40.547370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the 
state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.381 [2024-11-18 18:36:40.547775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.382 [2024-11-18 18:36:40.547794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.382 [2024-11-18 18:36:40.547812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.382 [2024-11-18 18:36:40.547830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.382 [2024-11-18 18:36:40.547850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.382 [2024-11-18 18:36:40.547869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.382 [2024-11-18 18:36:40.547887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.382 [2024-11-18 18:36:40.547905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.382 [2024-11-18 18:36:40.547939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.382 [2024-11-18 18:36:40.547962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.382 [2024-11-18 18:36:40.547982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.382 [2024-11-18 18:36:40.548000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.382 [2024-11-18 18:36:40.548019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.382 [2024-11-18 18:36:40.548038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.382 [2024-11-18 18:36:40.548056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.382 [2024-11-18 18:36:40.548076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.382 [2024-11-18 18:36:40.548094] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:42.382 [2024-11-18 18:36:40.551051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:42.382 [2024-11-18 18:36:40.555371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:42.383 [2024-11-18 18:36:40.556438]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.556456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.556474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.556493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.556512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.556530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.556548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.556566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.556584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.556641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.556662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.558898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 
[2024-11-18 18:36:40.558936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.558957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.558976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.558995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the 
state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.384 [2024-11-18 18:36:40.559404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.385 [2024-11-18 18:36:40.559492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same [2024-11-18 18:36:40.559507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(6) to be set 00:29:42.385 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.385 [2024-11-18 18:36:40.559531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.385 [2024-11-18 18:36:40.559567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.385 [2024-11-18 18:36:40.559600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.385 [2024-11-18 18:36:40.559649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.385 [2024-11-18 18:36:40.559668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.385 [2024-11-18 18:36:40.559687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-18 
18:36:40.559706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.385 with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:12[2024-11-18 18:36:40.559749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.385 with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.385 [2024-11-18 18:36:40.559791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.385 [2024-11-18 18:36:40.559809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-18 18:36:40.559828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.385 with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.385 [2024-11-18 18:36:40.559867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.385 [2024-11-18 18:36:40.559886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.385 [2024-11-18 18:36:40.559939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.385 [2024-11-18 18:36:40.559958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.559969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.385 [2024-11-18 18:36:40.559976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 
[2024-11-18 18:36:40.559992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.385 [2024-11-18 18:36:40.559993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.560014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.560019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.385 [2024-11-18 18:36:40.560031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.560042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.385 [2024-11-18 18:36:40.560053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.560067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.385 [2024-11-18 18:36:40.560071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.560089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.385 [2024-11-18 18:36:40.560090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.560111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.560116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.385 [2024-11-18 18:36:40.560128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.560138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.385 [2024-11-18 18:36:40.560146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:42.385 [2024-11-18 18:36:40.560180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.385 [2024-11-18 18:36:40.560203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.385 [2024-11-18 18:36:40.560229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.385 [2024-11-18 18:36:40.560251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.385 [2024-11-18 18:36:40.560277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.385 [2024-11-18 18:36:40.560300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.385 [2024-11-18 18:36:40.560325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:42.385 [2024-11-18 18:36:40.560347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.385 [2024-11-18 18:36:40.560373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.385 [2024-11-18 18:36:40.560396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.385 [2024-11-18 18:36:40.560421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.560444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.560469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.560492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.560540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.560563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.560604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.560638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.560664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.560687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.560712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.560736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.560761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.560784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.560809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.560831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.560856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.560878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.560903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.560925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.560967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.560989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.561013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.561035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.561059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.561081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.561106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.561127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.561152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.561177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.561203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.561225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.561265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.561287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.561313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.561336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.561361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.561383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.561407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.561429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.561455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.561478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 
[2024-11-18 18:36:40.561503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.561525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.561549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.561572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.561597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.561628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.561655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.561677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.561705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.561729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.561754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.561776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.561806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.561829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.561855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.561877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.561902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.561940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.561966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.386 [2024-11-18 18:36:40.561989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.386 [2024-11-18 18:36:40.562014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.387 [2024-11-18 18:36:40.562036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.562060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.387 [2024-11-18 18:36:40.562082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.562105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.387 [2024-11-18 18:36:40.562127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.562151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.387 [2024-11-18 18:36:40.562173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.562197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.387 [2024-11-18 18:36:40.562220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.562245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.387 [2024-11-18 18:36:40.562267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.562307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.387 [2024-11-18 18:36:40.562341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.562370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.387 [2024-11-18 18:36:40.562395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.562421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.387 [2024-11-18 18:36:40.562448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.562475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.387 [2024-11-18 18:36:40.562497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.562523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.387 [2024-11-18 18:36:40.562545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.562579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.387 [2024-11-18 18:36:40.562602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.562637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.387 [2024-11-18 
18:36:40.562662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.562688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.387 [2024-11-18 18:36:40.562711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.562737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.387 [2024-11-18 18:36:40.562760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.562786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.387 [2024-11-18 18:36:40.562809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.562885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.387 1410.00 IOPS, 88.12 MiB/s [2024-11-18T17:36:40.724Z] [2024-11-18 18:36:40.592446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:42.387 [2024-11-18 18:36:40.592656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.387 [2024-11-18 18:36:40.592693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 
[2024-11-18 18:36:40.592720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.387 [2024-11-18 18:36:40.592743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.592766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.387 [2024-11-18 18:36:40.592789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.592812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.387 [2024-11-18 18:36:40.592853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.592882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set 00:29:42.387 [2024-11-18 18:36:40.592960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.387 [2024-11-18 18:36:40.592990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.593014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.387 [2024-11-18 18:36:40.593037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.593060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.387 [2024-11-18 18:36:40.593082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.593104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.387 [2024-11-18 18:36:40.593126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.593147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6b00 is same with the state(6) to be set 00:29:42.387 [2024-11-18 18:36:40.593192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:42.387 [2024-11-18 18:36:40.593269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.387 [2024-11-18 18:36:40.593299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.593324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.387 [2024-11-18 18:36:40.593345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.593369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.387 [2024-11-18 18:36:40.593390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 
18:36:40.593413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.387 [2024-11-18 18:36:40.593435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.593456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:42.387 [2024-11-18 18:36:40.593523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.387 [2024-11-18 18:36:40.593551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.593575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.387 [2024-11-18 18:36:40.593597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.593633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.387 [2024-11-18 18:36:40.593656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.387 [2024-11-18 18:36:40.593684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.388 [2024-11-18 18:36:40.593707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.593728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x6150001f4d00 is same with the state(6) to be set 00:29:42.388 [2024-11-18 18:36:40.593799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.388 [2024-11-18 18:36:40.593829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.593853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.388 [2024-11-18 18:36:40.593875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.593898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.388 [2024-11-18 18:36:40.593920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.593943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.388 [2024-11-18 18:36:40.593965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.593986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set 00:29:42.388 [2024-11-18 18:36:40.594055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.388 [2024-11-18 18:36:40.594085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 
18:36:40.594109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.388 [2024-11-18 18:36:40.594131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.594154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.388 [2024-11-18 18:36:40.594176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.594198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.388 [2024-11-18 18:36:40.594220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.594241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:42.388 [2024-11-18 18:36:40.594308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.388 [2024-11-18 18:36:40.594337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.594361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.388 [2024-11-18 18:36:40.594384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.594411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.388 [2024-11-18 18:36:40.594434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.594456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.388 [2024-11-18 18:36:40.594478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.594499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:42.388 [2024-11-18 18:36:40.594558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.388 [2024-11-18 18:36:40.594587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.594618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.388 [2024-11-18 18:36:40.594642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.594665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.388 [2024-11-18 18:36:40.594687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.594710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.388 [2024-11-18 
18:36:40.594731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.594752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:42.388 [2024-11-18 18:36:40.596792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.388 [2024-11-18 18:36:40.596830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.596873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.388 [2024-11-18 18:36:40.596897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.596925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.388 [2024-11-18 18:36:40.596948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.596974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.388 [2024-11-18 18:36:40.596998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.597023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.388 [2024-11-18 18:36:40.597046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.597073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.388 [2024-11-18 18:36:40.597102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.597129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.388 [2024-11-18 18:36:40.597152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.597176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.388 [2024-11-18 18:36:40.597215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.597242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.388 [2024-11-18 18:36:40.597264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.597288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.388 [2024-11-18 18:36:40.597309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.388 [2024-11-18 18:36:40.597334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 
nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.597356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.597381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.597403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.597428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.597450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.597475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.597497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.597521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.597543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.597568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.597605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:42.389 [2024-11-18 18:36:40.597647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.597671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.597697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.597720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.597750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.597775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.597801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.597824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.597850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.597873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.597899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 
18:36:40.597937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.597963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.597985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.598010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.598033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.598058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.598080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.598105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.598126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.598150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.598173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.598197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.598219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.598243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.598265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.598290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.598312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.598337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.598363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.598402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.598426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.598451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.598473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.598498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.598520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.598545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.598567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.598591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.598642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.598671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.598695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.598720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.598743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.598769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.598791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.598816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.598839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.598864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.598887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.598912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.598950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.598975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.598997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.389 [2024-11-18 18:36:40.599026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.389 [2024-11-18 18:36:40.599049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 
[2024-11-18 18:36:40.599074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.599096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.599120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.599142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.599166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.599189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.599214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.599236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.599260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.599282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.599306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.599328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.599353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.599375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.599399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.599421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.599445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.599468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.599492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.599514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.599538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.599560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.599585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.599632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.599662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.599685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.599711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.599734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.599759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.599781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.599806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.599828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.599854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.599877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.599902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.599939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.599964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.599986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.600010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.600032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.600373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:42.390 [2024-11-18 18:36:40.600435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:42.390 [2024-11-18 18:36:40.603258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:29:42.390 [2024-11-18 18:36:40.603315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:42.390 [2024-11-18 18:36:40.603504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.390 [2024-11-18 18:36:40.603547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3900 with addr=10.0.0.2, port=4420 00:29:42.390 
[2024-11-18 18:36:40.603572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:42.390 [2024-11-18 18:36:40.603630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:42.390 [2024-11-18 18:36:40.603686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor 00:29:42.390 [2024-11-18 18:36:40.603754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:42.390 [2024-11-18 18:36:40.603801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:29:42.390 [2024-11-18 18:36:40.603847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:29:42.390 [2024-11-18 18:36:40.603896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:42.390 [2024-11-18 18:36:40.604049] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:42.390 [2024-11-18 18:36:40.604297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.604331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.604368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.604395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 
[2024-11-18 18:36:40.604424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.604448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.604474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.604512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.604539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.604561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.604586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.604634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.604662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.604685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.604712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.604735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.604761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.604784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.604810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.604833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.604859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.390 [2024-11-18 18:36:40.604888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.390 [2024-11-18 18:36:40.604930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.604953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.604979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.605001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.605027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.605049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.605073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.605095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.605121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.605143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.605167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.605189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.605214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.605237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.605262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.605284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:42.391 [2024-11-18 18:36:40.605308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.605330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.605355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.605377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.605401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.605424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.605449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.605471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.605500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.605523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.605563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.605585] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.605637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.605660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.605685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.605708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.605734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.605757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.605783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.605805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.605832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.605854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.605880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.605918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.605945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.605967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.605992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.606014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.606039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.606061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.606086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.606108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.606133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.606159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.606186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.606208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.606233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.606255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.606280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.606302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.606327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.606349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.606374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.606396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.606422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 
18:36:40.606444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.606469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.606490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.606515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.606537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.606562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.606583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.606639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.391 [2024-11-18 18:36:40.606663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.391 [2024-11-18 18:36:40.606689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.606711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.606737] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.606760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.606791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.606814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.606840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.606862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.606888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.606926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.606952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.606974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.606998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.607021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.607046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.607068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.607093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.607115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.607141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.607163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.607188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.607211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.607236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.607259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.607284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 
[2024-11-18 18:36:40.607306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.607331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.607353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.607378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.607404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.607430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.607453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.607478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.607500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.607525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.607547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.607569] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa700 is same with the state(6) to be set 00:29:42.392 [2024-11-18 18:36:40.607967] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:42.392 [2024-11-18 18:36:40.608088] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:42.392 [2024-11-18 18:36:40.608184] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:42.392 [2024-11-18 18:36:40.608282] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:42.392 [2024-11-18 18:36:40.608893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:42.392 [2024-11-18 18:36:40.609016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.609047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.609081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.609104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.609130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.609153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.609179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 
[2024-11-18 18:36:40.609201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.609226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.609249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.609275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.609298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.609323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.609346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.609376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.609400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.609426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.609449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.609474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.609497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.609523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.609545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.609570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.609617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.609647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.609669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.609695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.392 [2024-11-18 18:36:40.609718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.392 [2024-11-18 18:36:40.609744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.609766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.609792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.609815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.609840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.609863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.609888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.609926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.609953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.609975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.609999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.610026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.610051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.610074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.610116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.610139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.610163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.610185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.610209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.610232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.610256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.610279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.610303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.610325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.610350] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.610372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.610397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.610418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.610443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.610466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.610490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.610512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.610537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.610558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.610583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.610634] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.610666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.610689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.610715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.610737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.610763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.610785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.610810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.610832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.610858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.610880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.610922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.610956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.610981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.611003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.611028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.611050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.611075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.393 [2024-11-18 18:36:40.611097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.393 [2024-11-18 18:36:40.611121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.611143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.611169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.611191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 
18:36:40.611215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.611237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.611261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.611288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.611319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.611341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.611366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.611387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.611413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.611435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.611460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.611482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.611507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.611529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.611555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.611578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.611602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.611662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.611689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.611712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.611738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.611761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.611786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.611808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.611835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.611858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.611884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.611906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.611951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.611974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.612000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.612022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.612046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.612068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:42.394 [2024-11-18 18:36:40.612092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.612114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.612138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.612160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.612184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.612205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.612229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.612251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.612273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9f80 is same with the state(6) to be set 00:29:42.394 [2024-11-18 18:36:40.615402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.615439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.615483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.615518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.615546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.615569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.615594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.615626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.615664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.615687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.615718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.615741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.615766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.615788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.615814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.615836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.615862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.615884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.615917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.394 [2024-11-18 18:36:40.615939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.394 [2024-11-18 18:36:40.615965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.616004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.616030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.616052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.616077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.616099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.616124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.616146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.616170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.616192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.616217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.616239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.616264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.616286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.616310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.616337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.616362] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.616384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.616409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.616431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.616455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.616514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.616542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.616564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.616589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.616618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.616654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.616676] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.616701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.616722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.616746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.616768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.616792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.616814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.616839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.616860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.616885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.616912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.616937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.616959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.616990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.617013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.617038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.617060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.617084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.617105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.617129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.617150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.617174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.617195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 
18:36:40.617219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.617241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.617265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.617286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.617310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.617331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.617355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.617377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.617401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.617422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.617447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.617468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.617492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.617513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.617538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.617563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.395 [2024-11-18 18:36:40.617602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.395 [2024-11-18 18:36:40.617633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.617662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.617684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.617709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.617731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.617756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 
nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.617779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.617803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.617825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.617850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.617872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.617908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.617946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.617971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.617993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.618017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.618039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:42.396 [2024-11-18 18:36:40.618063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.618084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.618108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.618129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.618153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.618174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.618198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.618223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.618248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.618270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.618294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.618315] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.618339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.618360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.618384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.618406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.618430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.618451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.618475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.618496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.618521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.618543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.618567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.618588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.618633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb600 is same with the state(6) to be set 00:29:42.396 [2024-11-18 18:36:40.620212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:42.396 [2024-11-18 18:36:40.620247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:42.396 [2024-11-18 18:36:40.620274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:42.396 [2024-11-18 18:36:40.620491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.396 [2024-11-18 18:36:40.620528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:29:42.396 [2024-11-18 18:36:40.620551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:42.396 [2024-11-18 18:36:40.620576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:42.396 [2024-11-18 18:36:40.620598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:42.396 [2024-11-18 18:36:40.620651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:42.396 [2024-11-18 18:36:40.620684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:29:42.396 [2024-11-18 18:36:40.620812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:42.396 [2024-11-18 18:36:40.621146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.396 [2024-11-18 18:36:40.621183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:42.396 [2024-11-18 18:36:40.621206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:42.396 [2024-11-18 18:36:40.621332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.396 [2024-11-18 18:36:40.621365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:29:42.396 [2024-11-18 18:36:40.621388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:42.396 [2024-11-18 18:36:40.621519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.396 [2024-11-18 18:36:40.621551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420 00:29:42.396 [2024-11-18 18:36:40.621574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:42.396 [2024-11-18 18:36:40.622147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.396 [2024-11-18 18:36:40.622177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.396 [2024-11-18 18:36:40.622210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.396 [2024-11-18 18:36:40.622233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.396 [2024-11-18 18:36:40.622258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.396 [2024-11-18 18:36:40.622280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.396 [2024-11-18 18:36:40.622304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.396 [2024-11-18 18:36:40.622327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.396 [2024-11-18 18:36:40.622352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.396 [2024-11-18 18:36:40.622374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.396 [2024-11-18 18:36:40.622398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.622420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.622445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.622468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.622499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.622522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.622548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.622570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.622597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.622654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.622694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.622717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.622743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.622765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.622791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.622814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.622839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.622861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.622886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.622908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.622956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.622979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.623002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.623024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.623049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.623071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.623095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.623117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.623141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.623163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.623192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.623214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.623238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.623260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.623284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.623306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.623330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.623351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.623375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.623398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.623423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.623444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.623469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.623490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.623515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.623537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.623561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.623583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.623613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.623662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.623689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.623712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.623736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.623759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.623785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.623812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.623838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.623860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.623885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.623919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.623959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.623981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.624005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.624027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.624051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.624073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.397 [2024-11-18 18:36:40.624096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.397 [2024-11-18 18:36:40.624118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.624143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.624165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.624189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.624211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.624236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.624258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.624283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.624304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.624328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.624350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.624374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.624396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.624427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.624450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.624475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.624497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.624521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.624542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.624567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.624604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.624648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.624670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.624695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.624718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.624743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.624765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.624789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.624812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.624837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.624860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.624884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.624928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.624954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.624975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.625000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.625022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.625046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.625072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.625097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.625120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.625144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.625166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.625190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.625211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.625236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.625257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.625280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.625302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.625326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.625348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.625370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa200 is same with the state(6) to be set
00:29:42.398 [2024-11-18 18:36:40.627387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.627418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.398 [2024-11-18 18:36:40.627449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.398 [2024-11-18 18:36:40.627472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.627497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.627519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.627544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.627565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.627590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.627645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.627675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.627704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.627730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.627752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.627777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.627800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.627825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.627848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.627873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.627919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.627962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.627984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.628008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.628030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.628054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.628076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.628101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.628123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.628147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.628168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.628193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.628214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.628238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.628260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.628284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.628306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.628335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.628357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.628381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.628403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.628427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.628448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.628473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.628494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.628518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.628540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.628565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.628601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.628647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.628670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.628696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.628719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.628744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.628766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.628791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.628813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.628839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.628861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.628886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.628923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.628948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.628976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.629002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.399 [2024-11-18 18:36:40.629024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.399 [2024-11-18 18:36:40.629048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.400 [2024-11-18 18:36:40.629069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.400 [2024-11-18 18:36:40.629094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.400 [2024-11-18 18:36:40.629116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.400 [2024-11-18 18:36:40.629140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.400 [2024-11-18 18:36:40.629162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.400 [2024-11-18 18:36:40.629186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.400 [2024-11-18 18:36:40.629208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.400 [2024-11-18 18:36:40.629233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.400 [2024-11-18 18:36:40.629254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.400 [2024-11-18 18:36:40.629278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.400 [2024-11-18 18:36:40.629300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.400 [2024-11-18 18:36:40.629324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.400 [2024-11-18 18:36:40.629345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.400 [2024-11-18 18:36:40.629370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.400 [2024-11-18 18:36:40.629391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.400 [2024-11-18 18:36:40.629415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.400 [2024-11-18 18:36:40.629438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.400 [2024-11-18 18:36:40.629463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.400 [2024-11-18 18:36:40.629485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.400 [2024-11-18 18:36:40.629510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.400 [2024-11-18 18:36:40.629531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.400 [2024-11-18 18:36:40.629562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.400 [2024-11-18 18:36:40.629599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.400 [2024-11-18 18:36:40.629633] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 [2024-11-18 18:36:40.629659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.629685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 [2024-11-18 18:36:40.629707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.629733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 [2024-11-18 18:36:40.629755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.629780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 [2024-11-18 18:36:40.629802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.629828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 [2024-11-18 18:36:40.629850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.629876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 [2024-11-18 18:36:40.629906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.629931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 [2024-11-18 18:36:40.629954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.629979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 [2024-11-18 18:36:40.630002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.630027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 [2024-11-18 18:36:40.630049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.630073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 [2024-11-18 18:36:40.630096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.630121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 [2024-11-18 18:36:40.630143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.630168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 
[2024-11-18 18:36:40.630194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.630221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 [2024-11-18 18:36:40.630245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.630271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 [2024-11-18 18:36:40.630293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.630318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 [2024-11-18 18:36:40.630341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.630366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 [2024-11-18 18:36:40.630388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.630413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 [2024-11-18 18:36:40.630435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.630460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 [2024-11-18 18:36:40.630482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.630508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 [2024-11-18 18:36:40.630530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.630555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 [2024-11-18 18:36:40.630577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.630600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa980 is same with the state(6) to be set 00:29:42.400 [2024-11-18 18:36:40.632204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.400 [2024-11-18 18:36:40.632237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.400 [2024-11-18 18:36:40.632272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.632296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.632323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 
nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.632346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.632372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.632401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.632428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.632451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.632477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.632500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.632526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.632549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.632576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.632599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:42.401 [2024-11-18 18:36:40.632644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.632668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.632711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.632736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.632763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.632786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.632811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.632833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.632859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.632881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.632917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.632939] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.632965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.632988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.633013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.633036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.633067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.633091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.633117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.633140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.633166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.633189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.633215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.633238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.633265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.633287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.633313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.633337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.633362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.633385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.633411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.633433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.633460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.633483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.633510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.633533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.633560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.633585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.633618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.633654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.633680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.633708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.633735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.633758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.633784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 
18:36:40.633807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.633832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.633855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.633881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.633906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.633933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.633956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.401 [2024-11-18 18:36:40.633982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.401 [2024-11-18 18:36:40.634004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.634030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.634053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.634079] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.634102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.634127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.634150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.634176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.634199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.634224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.634247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.634273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.634296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.634326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.634350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.634376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.634399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.634424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.634447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.634474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.634496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.634521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.634544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.634570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.634593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.634625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 
[2024-11-18 18:36:40.634657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.634683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.634706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.634731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.634755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.634781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.634803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.634829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.634852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.634878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.634910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.634935] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.634963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.634991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.635014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.635039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.635062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.635088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.635114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.635141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.635164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.402 [2024-11-18 18:36:40.635190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.402 [2024-11-18 18:36:40.635213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.402 [2024-11-18 18:36:40.635239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.402 [2024-11-18 18:36:40.635263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.402 [2024-11-18 18:36:40.635289-40.635459] nvme_qpair.c: [same print_command/print_completion pair repeated for READ cid:60-63, lba:24064-24448, len:128 — all ABORTED - SQ DELETION (00/08) qid:1]
00:29:42.402 [2024-11-18 18:36:40.635482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fac00 is same with the state(6) to be set
00:29:42.402 [2024-11-18 18:36:40.637137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.402 [2024-11-18 18:36:40.637171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.402-00:29:42.404 [2024-11-18 18:36:40.637207-40.640359] nvme_qpair.c: [same pair repeated for READ cid:1-63, lba:16512-24448, len:128 — all ABORTED - SQ DELETION (00/08) qid:1]
00:29:42.404 [2024-11-18 18:36:40.640382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fae80 is same with the state(6) to be set
00:29:42.404 [2024-11-18 18:36:40.642024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.404 [2024-11-18 18:36:40.642056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:42.404-00:29:42.406 [2024-11-18 18:36:40.642093-40.644186] nvme_qpair.c: [same pair repeated for WRITE cid:57-63 (lba:31872-32640), READ cid:5-16 (lba:25216-26624), WRITE cid:0-4 (lba:32768-33280), and READ cid:17-34 (lba:26752-28928), len:128 — all ABORTED - SQ DELETION (00/08) qid:1; record for cid:34 truncated: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.644211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.644234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.644259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.644283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.644308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.644332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.644366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.644389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.644414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.644437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.644463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.644486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.644511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.644533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.644563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.644587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.644619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.644651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.644676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.644699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.644724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.644747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.644772] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.644795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.644821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.644843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.644870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.644893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.644918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.644941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.644967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.644989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.645014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.645036] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.645061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.645084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.645109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.645132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.645156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.645183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.645209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.406 [2024-11-18 18:36:40.645233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.406 [2024-11-18 18:36:40.645256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb100 is same with the state(6) to be set 00:29:42.406 [2024-11-18 18:36:40.650543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:42.406 [2024-11-18 18:36:40.650596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:42.406 [2024-11-18 
18:36:40.650695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:42.406 [2024-11-18 18:36:40.650730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:42.406 [2024-11-18 18:36:40.650851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:42.406 [2024-11-18 18:36:40.650893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:42.406 [2024-11-18 18:36:40.650939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:42.406 [2024-11-18 18:36:40.650966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:42.406 [2024-11-18 18:36:40.650987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:42.406 [2024-11-18 18:36:40.651012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:42.406 [2024-11-18 18:36:40.651037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:29:42.406 [2024-11-18 18:36:40.651113] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:29:42.406 [2024-11-18 18:36:40.651147] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:29:42.406 [2024-11-18 18:36:40.651189] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:29:42.407 [2024-11-18 18:36:40.651220] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:29:42.407 [2024-11-18 18:36:40.651262] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:29:42.407 [2024-11-18 18:36:40.651770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:29:42.407 task offset: 24448 on job bdev=Nvme3n1 fails 00:29:42.407 00:29:42.407 Latency(us) 00:29:42.407 [2024-11-18T17:36:40.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.407 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:42.407 Job: Nvme1n1 ended in about 1.08 seconds with error 00:29:42.407 Verification LBA range: start 0x0 length 0x400 00:29:42.407 Nvme1n1 : 1.08 123.60 7.73 59.48 0.00 346070.00 23884.23 310689.19 00:29:42.407 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:42.407 Job: Nvme2n1 ended in about 1.09 seconds with error 00:29:42.407 Verification LBA range: start 0x0 length 0x400 00:29:42.407 Nvme2n1 : 1.09 117.53 7.35 58.76 0.00 352729.32 22816.24 306028.85 00:29:42.407 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:42.407 Job: Nvme3n1 ended in about 1.06 seconds with error 00:29:42.407 Verification LBA range: start 0x0 length 0x400 00:29:42.407 Nvme3n1 : 1.06 180.41 11.28 60.45 0.00 252972.96 20097.71 293601.28 00:29:42.407 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:42.407 Job: Nvme4n1 ended in about 1.08 seconds with error 00:29:42.407 Verification LBA range: start 0x0 length 0x400 00:29:42.407 Nvme4n1 : 1.08 123.42 7.71 59.39 0.00 327199.97 28738.75 324670.20 00:29:42.407 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:42.407 Job: 
Nvme5n1 ended in about 1.09 seconds with error 00:29:42.407 Verification LBA range: start 0x0 length 0x400 00:29:42.407 Nvme5n1 : 1.09 116.97 7.31 58.48 0.00 334690.73 37088.52 333990.87 00:29:42.407 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:42.407 Job: Nvme6n1 ended in about 1.10 seconds with error 00:29:42.407 Verification LBA range: start 0x0 length 0x400 00:29:42.407 Nvme6n1 : 1.10 120.09 7.51 58.22 0.00 323072.42 23204.60 346418.44 00:29:42.407 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:42.407 Job: Nvme7n1 ended in about 1.10 seconds with error 00:29:42.407 Verification LBA range: start 0x0 length 0x400 00:29:42.407 Nvme7n1 : 1.10 115.93 7.25 57.97 0.00 324810.27 43884.85 304475.40 00:29:42.407 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:42.407 Job: Nvme8n1 ended in about 1.11 seconds with error 00:29:42.407 Verification LBA range: start 0x0 length 0x400 00:29:42.407 Nvme8n1 : 1.11 177.65 11.10 57.71 0.00 235180.78 21359.88 306028.85 00:29:42.407 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:42.407 Job: Nvme9n1 ended in about 1.06 seconds with error 00:29:42.407 Verification LBA range: start 0x0 length 0x400 00:29:42.407 Nvme9n1 : 1.06 180.34 11.27 60.11 0.00 223848.20 10631.40 284280.60 00:29:42.407 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:42.407 Job: Nvme10n1 ended in about 1.08 seconds with error 00:29:42.407 Verification LBA range: start 0x0 length 0x400 00:29:42.407 Nvme10n1 : 1.08 133.04 8.31 59.13 0.00 274991.20 41943.04 332437.43 00:29:42.407 [2024-11-18T17:36:40.744Z] =================================================================================================================== 00:29:42.407 [2024-11-18T17:36:40.744Z] Total : 1388.96 86.81 589.71 0.00 293894.78 10631.40 346418.44 00:29:42.691 [2024-11-18 18:36:40.740246] app.c:1064:spdk_app_stop: *WARNING*: 
spdk_app_stop'd on non-zero 00:29:42.691 [2024-11-18 18:36:40.740382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:29:42.691 [2024-11-18 18:36:40.740822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.691 [2024-11-18 18:36:40.740873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3900 with addr=10.0.0.2, port=4420 00:29:42.691 [2024-11-18 18:36:40.740904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:42.691 [2024-11-18 18:36:40.741028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.691 [2024-11-18 18:36:40.741063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:29:42.691 [2024-11-18 18:36:40.741096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:42.692 [2024-11-18 18:36:40.741199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.692 [2024-11-18 18:36:40.741233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4d00 with addr=10.0.0.2, port=4420 00:29:42.692 [2024-11-18 18:36:40.741257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set 00:29:42.692 [2024-11-18 18:36:40.741365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.692 [2024-11-18 18:36:40.741409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5700 with addr=10.0.0.2, port=4420 00:29:42.692 [2024-11-18 18:36:40.741434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the 
state(6) to be set 00:29:42.692 [2024-11-18 18:36:40.741459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:42.692 [2024-11-18 18:36:40.741480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:42.692 [2024-11-18 18:36:40.741506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:42.692 [2024-11-18 18:36:40.741534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:42.692 [2024-11-18 18:36:40.741560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:42.692 [2024-11-18 18:36:40.741580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:42.692 [2024-11-18 18:36:40.741600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:42.692 [2024-11-18 18:36:40.741629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:42.692 [2024-11-18 18:36:40.741664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:42.692 [2024-11-18 18:36:40.741683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:42.692 [2024-11-18 18:36:40.741704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:42.692 [2024-11-18 18:36:40.741723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:29:42.692 [2024-11-18 18:36:40.744434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:29:42.692 [2024-11-18 18:36:40.744704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.692 [2024-11-18 18:36:40.744747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420 00:29:42.692 [2024-11-18 18:36:40.744773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set 00:29:42.692 [2024-11-18 18:36:40.744892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.692 [2024-11-18 18:36:40.744927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6b00 with addr=10.0.0.2, port=4420 00:29:42.692 [2024-11-18 18:36:40.744952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6b00 is same with the state(6) to be set 00:29:42.692 [2024-11-18 18:36:40.744991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:42.692 [2024-11-18 18:36:40.745031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:42.692 [2024-11-18 18:36:40.745060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:29:42.692 [2024-11-18 18:36:40.745089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:29:42.692 [2024-11-18 18:36:40.745198] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
00:29:42.692 [2024-11-18 18:36:40.745237] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:29:42.692 [2024-11-18 18:36:40.745267] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:29:42.692 [2024-11-18 18:36:40.745307] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:29:42.692 [2024-11-18 18:36:40.746189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.692 [2024-11-18 18:36:40.746230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:29:42.692 [2024-11-18 18:36:40.746255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:42.692 [2024-11-18 18:36:40.746292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:42.692 [2024-11-18 18:36:40.746322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor 00:29:42.692 [2024-11-18 18:36:40.746358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:42.692 [2024-11-18 18:36:40.746380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:42.692 [2024-11-18 18:36:40.746404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:42.692 [2024-11-18 18:36:40.746428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:29:42.692 [2024-11-18 18:36:40.746454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:42.692 [2024-11-18 18:36:40.746474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:42.692 [2024-11-18 18:36:40.746493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:42.692 [2024-11-18 18:36:40.746513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:42.692 [2024-11-18 18:36:40.746535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:42.692 [2024-11-18 18:36:40.746554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:42.692 [2024-11-18 18:36:40.746575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:42.692 [2024-11-18 18:36:40.746594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:42.692 [2024-11-18 18:36:40.746625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:42.692 [2024-11-18 18:36:40.746646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:42.692 [2024-11-18 18:36:40.746667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:42.692 [2024-11-18 18:36:40.746687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:29:42.692 [2024-11-18 18:36:40.746830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:42.692 [2024-11-18 18:36:40.746867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:42.692 [2024-11-18 18:36:40.746894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:42.692 [2024-11-18 18:36:40.746974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:42.692 [2024-11-18 18:36:40.747004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:42.692 [2024-11-18 18:36:40.747025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:42.692 [2024-11-18 18:36:40.747046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:42.692 [2024-11-18 18:36:40.747071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:29:42.692 [2024-11-18 18:36:40.747095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:42.692 [2024-11-18 18:36:40.747115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:42.692 [2024-11-18 18:36:40.747135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:42.692 [2024-11-18 18:36:40.747154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:29:42.692 [2024-11-18 18:36:40.747323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.692 [2024-11-18 18:36:40.747360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420 00:29:42.692 [2024-11-18 18:36:40.747384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:42.692 [2024-11-18 18:36:40.747506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.692 [2024-11-18 18:36:40.747540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:29:42.692 [2024-11-18 18:36:40.747564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:42.692 [2024-11-18 18:36:40.747699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.692 [2024-11-18 18:36:40.747733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:42.692 [2024-11-18 18:36:40.747757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:42.692 [2024-11-18 18:36:40.747780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:42.692 [2024-11-18 18:36:40.747800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:42.692 [2024-11-18 18:36:40.747821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:42.692 [2024-11-18 18:36:40.747843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:29:42.692 [2024-11-18 18:36:40.747913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:42.692 [2024-11-18 18:36:40.747949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:42.692 [2024-11-18 18:36:40.747980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:42.692 [2024-11-18 18:36:40.748046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:42.692 [2024-11-18 18:36:40.748073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:42.692 [2024-11-18 18:36:40.748094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:42.692 [2024-11-18 18:36:40.748115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:42.692 [2024-11-18 18:36:40.748138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:42.692 [2024-11-18 18:36:40.748158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:42.692 [2024-11-18 18:36:40.748177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:42.692 [2024-11-18 18:36:40.748202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:29:42.692 [2024-11-18 18:36:40.748227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:29:42.692 [2024-11-18 18:36:40.748247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:29:42.692 [2024-11-18 18:36:40.748267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:29:42.692 [2024-11-18 18:36:40.748287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:29:45.241 18:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3055915
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3055915
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3055915
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3055603 ']'
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3055603
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3055603 ']'
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3055603
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3055603) - No such process
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3055603 is not found'
Process with pid 3055603 is not found
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:46.176 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:48.705 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:48.706
00:29:48.706 real 0m11.554s
00:29:48.706 user 0m34.314s
00:29:48.706 sys 0m2.007s
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:29:48.706 ************************************
00:29:48.706 END TEST nvmf_shutdown_tc3
00:29:48.706 ************************************
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:48.706 ************************************
00:29:48.706 START TEST nvmf_shutdown_tc4
00:29:48.706 ************************************
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
Found 0000:0a:00.0 (0x8086 - 0x159b)
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:48.706 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
Found 0000:0a:00.1 (0x8086 - 0x159b)
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
Found net devices under 0000:0a:00.0: cvl_0_0
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
Found net devices under 0000:0a:00.1: cvl_0_1
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms

--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms

--- 10.0.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:48.707 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:48.708 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:29:48.708 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:48.708 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:48.708 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:48.708 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3057091
00:29:48.708 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:29:48.708 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3057091
00:29:48.708 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3057091 ']'
00:29:48.708 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:48.708 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:48.708 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:48.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:48.708 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:48.708 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:48.965 [2024-11-18 18:36:46.785295] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:29:48.965 [2024-11-18 18:36:46.785441] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:48.965 [2024-11-18 18:36:46.942831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:48.965 [2024-11-18 18:36:47.086074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:48.965 [2024-11-18 18:36:47.086155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:48.965 [2024-11-18 18:36:47.086181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:48.965 [2024-11-18 18:36:47.086207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:48.965 [2024-11-18 18:36:47.086229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:48.965 [2024-11-18 18:36:47.089121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:48.965 [2024-11-18 18:36:47.089222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:48.965 [2024-11-18 18:36:47.089283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:48.965 [2024-11-18 18:36:47.089290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:49.531 [2024-11-18 18:36:47.824155] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:49.531 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:49.789 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:49.789 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:49.789 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:49.789 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:49.789 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:49.789 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:49.789 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:29:49.789 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:49.789 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:49.789 Malloc1
00:29:49.789 [2024-11-18 18:36:47.973855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:49.789 Malloc2
00:29:50.047 Malloc3
00:29:50.047 Malloc4
00:29:50.047 Malloc5
00:29:50.305 Malloc6
00:29:50.305 Malloc7
00:29:50.562 Malloc8
00:29:50.562 Malloc9
00:29:50.562 Malloc10 00:29:50.562 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.562 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:50.562 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:50.562 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:50.562 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3057403 00:29:50.562 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:50.563 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:50.820 [2024-11-18 18:36:49.008095] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:56.085 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:56.085 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3057091 00:29:56.085 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3057091 ']' 00:29:56.085 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3057091 00:29:56.085 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:56.085 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:56.085 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3057091 00:29:56.085 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:56.085 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:56.085 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3057091' 00:29:56.085 killing process with pid 3057091 00:29:56.085 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3057091 00:29:56.085 18:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3057091 00:29:56.085 [2024-11-18 18:36:53.938415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(6) to be set 00:29:56.085 [2024-11-18 
18:36:53.938504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(6) to be set 00:29:56.085 [2024-11-18 18:36:53.938544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(6) to be set 00:29:56.085 [2024-11-18 18:36:53.938575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(6) to be set 00:29:56.085 [2024-11-18 18:36:53.938632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(6) to be set 00:29:56.085 [2024-11-18 18:36:53.938674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(6) to be set 00:29:56.085 [2024-11-18 18:36:53.938709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(6) to be set 00:29:56.085 [2024-11-18 18:36:53.938747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(6) to be set 00:29:56.085 [2024-11-18 18:36:53.938779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(6) to be set 00:29:56.085 [2024-11-18 18:36:53.938801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(6) to be set 00:29:56.085 [2024-11-18 18:36:53.940115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c880 is same with the state(6) to be set 00:29:56.085 [2024-11-18 18:36:53.940160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c880 is same with the state(6) to be set 00:29:56.085 [2024-11-18 18:36:53.940185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c880 is same with the state(6) to be 
set 00:29:56.085 [2024-11-18 18:36:53.940319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c880 is same with the state(6) to be set 00:29:56.085 [2024-11-18 18:36:53.940371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c880 is same with the state(6) to be set 00:29:56.085 [2024-11-18 18:36:53.940421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c880 is same with the state(6) to be set 00:29:56.085 [2024-11-18 18:36:53.940456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c880 is same with the state(6) to be set 00:29:56.085 [2024-11-18 18:36:53.940481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c880 is same with the state(6) to be set 00:29:56.085 [2024-11-18 18:36:53.941842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.941898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.941930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.941954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.941976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.941996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.942017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is 
same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.942039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.942061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.946515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d480 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.946569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d480 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.946601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d480 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.946638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d480 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.946665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d480 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.946686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d480 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.946707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d480 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.949576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.949686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.949715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.949737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.949757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.949778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.949820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.949842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 starting I/O failed: -6 00:29:56.086 [2024-11-18 18:36:53.955812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.955864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 [2024-11-18 18:36:53.955898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 [2024-11-18 18:36:53.955919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 [2024-11-18 18:36:53.955938] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.955961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 starting I/O failed: -6 00:29:56.086 [2024-11-18 18:36:53.955982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set 00:29:56.086 [2024-11-18 18:36:53.956008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 starting I/O failed: -6 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 starting I/O failed: -6 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 starting I/O failed: -6 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 starting I/O failed: -6 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 starting I/O failed: -6 
00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 starting I/O failed: -6 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 starting I/O failed: -6 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 starting I/O failed: -6 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 [2024-11-18 18:36:53.957231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.086 starting I/O failed: -6 00:29:56.086 starting I/O failed: -6 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 starting I/O failed: -6 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 starting I/O failed: -6 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 starting I/O failed: -6 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 starting I/O failed: -6 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 starting I/O failed: -6 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 starting I/O failed: -6 00:29:56.086 Write completed with error 
(sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 starting I/O failed: -6 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 starting I/O failed: -6 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 starting I/O failed: -6 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.086 starting I/O failed: -6 00:29:56.086 Write completed with error (sct=0, sc=8) 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 [2024-11-18 18:36:53.958896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 [2024-11-18 18:36:53.958947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 [2024-11-18 18:36:53.958974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same starting I/O failed: -6 00:29:56.087 with the state(6) to be set 00:29:56.087 [2024-11-18 18:36:53.959007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 [2024-11-18 18:36:53.959034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with 
error (sct=0, sc=8) 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 [2024-11-18 18:36:53.959564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with 
error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 
starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 [2024-11-18 18:36:53.962187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 
00:29:56.087 starting I/O failed: -6 00:29:56.087 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, 
sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error 
(sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 [2024-11-18 18:36:53.971794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.088 NVMe io qpair process completion error 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 Write completed with error (sct=0, sc=8) 00:29:56.088 starting I/O failed: -6 00:29:56.088 Write completed with error 
00:29:56.088 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [entry repeated for each queued write]
00:29:56.088 [2024-11-18 18:36:53.973631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.089 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [repeated]
00:29:56.089 [2024-11-18 18:36:53.975551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:56.089 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [repeated]
00:29:56.089 [2024-11-18 18:36:53.978258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.090 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [repeated]
00:29:56.090 [2024-11-18 18:36:53.991155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:56.090 NVMe io qpair process completion error
00:29:56.090 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [repeated]
00:29:56.090 [2024-11-18 18:36:53.993454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.091 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [repeated]
00:29:56.091 [2024-11-18 18:36:53.995661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:56.091 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [repeated]
00:29:56.091 [2024-11-18 18:36:53.998347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.092 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [repeated]
00:29:56.092 [2024-11-18 18:36:54.008160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:56.092 NVMe io qpair process completion error
00:29:56.092 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [repeated]
00:29:56.093 [2024-11-18 18:36:54.010576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.093 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [repeated]
00:29:56.093 [2024-11-18 18:36:54.012517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:56.093 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [repeated]
00:29:56.093 [2024-11-18 18:36:54.015222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.094 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [repeated]
failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting 
I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 
starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 [2024-11-18 18:36:54.027703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.094 NVMe io qpair process completion error 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with 
error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 [2024-11-18 18:36:54.029475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 
00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.094 starting I/O failed: -6 00:29:56.094 Write completed with error (sct=0, sc=8) 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 Write completed with error (sct=0, sc=8) 
00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 [2024-11-18 18:36:54.031585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 
Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, 
sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 [2024-11-18 18:36:54.034298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 
00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.095 Write completed with error (sct=0, sc=8) 00:29:56.095 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: 
-6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O 
failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 [2024-11-18 18:36:54.046627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.096 NVMe io qpair process completion error 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: 
-6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 [2024-11-18 18:36:54.048602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error 
(sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 Write completed with error (sct=0, sc=8) 00:29:56.096 starting I/O failed: -6 00:29:56.096 Write completed with error (sct=0, sc=8) 
00:29:56.096 Write completed with error (sct=0, sc=8) 
00:29:56.097 starting I/O failed: -6 
00:29:56.097 [2024-11-18 18:36:54.050585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 
00:29:56.097 [2024-11-18 18:36:54.053240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 
00:29:56.098 [2024-11-18 18:36:54.070971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 
00:29:56.098 NVMe io qpair process completion error 
00:29:56.098 [2024-11-18 18:36:54.073220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 
00:29:56.099 [2024-11-18 18:36:54.075159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 
00:29:56.099 [2024-11-18 18:36:54.077743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 
00:29:56.100 [2024-11-18 18:36:54.087377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 
00:29:56.100 NVMe io qpair process completion error 
00:29:56.100 [2024-11-18 18:36:54.089366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 
00:29:56.100 [2024-11-18 18:36:54.091493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 
00:29:56.101 [2024-11-18 18:36:54.094267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 
00:29:56.102 [2024-11-18 18:36:54.103772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 
00:29:56.102 NVMe io qpair process completion error 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 
Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 [2024-11-18 18:36:54.105773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 
00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 [2024-11-18 18:36:54.107882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write 
completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 
00:29:56.102 starting I/O failed: -6 00:29:56.102 Write completed with error (sct=0, sc=8) 00:29:56.102 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 [2024-11-18 18:36:54.110533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or 
address) on qpair id 2 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed 
with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write 
completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.103 starting I/O failed: -6 00:29:56.103 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 [2024-11-18 18:36:54.123146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.104 NVMe io qpair process completion error 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write 
completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, 
sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 [2024-11-18 18:36:54.125179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: 
-6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.104 [2024-11-18 18:36:54.127398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 Write completed with error (sct=0, sc=8) 00:29:56.104 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed 
with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 
starting I/O failed: -6 00:29:56.105 [2024-11-18 18:36:54.130106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error 
(sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with 
error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.105 starting I/O failed: -6 00:29:56.105 Write completed with error (sct=0, sc=8) 00:29:56.106 starting I/O failed: -6 00:29:56.106 Write completed with error (sct=0, sc=8) 00:29:56.106 starting I/O failed: -6 00:29:56.106 Write completed with error (sct=0, sc=8) 00:29:56.106 starting I/O failed: -6 00:29:56.106 Write completed with error (sct=0, sc=8) 00:29:56.106 starting I/O failed: -6 00:29:56.106 Write completed with error (sct=0, sc=8) 00:29:56.106 starting I/O failed: -6 00:29:56.106 Write completed with error (sct=0, sc=8) 00:29:56.106 starting I/O failed: -6 00:29:56.106 Write completed with error (sct=0, sc=8) 00:29:56.106 starting I/O failed: -6 00:29:56.106 Write completed with error (sct=0, sc=8) 00:29:56.106 starting I/O failed: -6 00:29:56.106 [2024-11-18 18:36:54.145531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.106 NVMe io qpair process completion error 00:29:56.106 [2024-11-18 18:36:54.166414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016d80 is 
same with the state(6) to be set 00:29:56.106 [2024-11-18 18:36:54.166656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set 00:29:56.106 [2024-11-18 18:36:54.166765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017c80 is same with the state(6) to be set 00:29:56.106 [2024-11-18 18:36:54.166858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000018180 is same with the state(6) to be set 00:29:56.106 [2024-11-18 18:36:54.166968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015980 is same with the state(6) to be set 00:29:56.106 [2024-11-18 18:36:54.167057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017280 is same with the state(6) to be set 00:29:56.106 [2024-11-18 18:36:54.167163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000018680 is same with the state(6) to be set 00:29:56.106 [2024-11-18 18:36:54.167270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015e80 is same with the state(6) to be set 00:29:56.106 [2024-11-18 18:36:54.167373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set 00:29:56.106 Initializing NVMe Controllers 00:29:56.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:29:56.106 Controller IO queue size 128, less than required. 00:29:56.106 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:56.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:29:56.106 Controller IO queue size 128, less than required. 
00:29:56.106 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:56.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:29:56.106 Controller IO queue size 128, less than required. 00:29:56.106 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:56.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:29:56.106 Controller IO queue size 128, less than required. 00:29:56.106 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:56.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:56.106 Controller IO queue size 128, less than required. 00:29:56.106 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:56.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:29:56.106 Controller IO queue size 128, less than required. 00:29:56.106 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:56.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:29:56.106 Controller IO queue size 128, less than required. 00:29:56.106 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:56.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:29:56.106 Controller IO queue size 128, less than required. 00:29:56.106 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:56.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:29:56.106 Controller IO queue size 128, less than required. 
00:29:56.106 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:56.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:29:56.106 Controller IO queue size 128, less than required. 00:29:56.106 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:56.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:29:56.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:29:56.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:29:56.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:29:56.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:56.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:29:56.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:29:56.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:29:56.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:29:56.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:29:56.106 Initialization complete. Launching workers. 
00:29:56.106 ======================================================== 00:29:56.106 Latency(us) 00:29:56.106 Device Information : IOPS MiB/s Average min max 00:29:56.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1366.89 58.73 93675.21 2014.08 208731.72 00:29:56.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1373.98 59.04 93327.02 1507.62 251375.94 00:29:56.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1394.19 59.91 92129.72 1603.45 264389.20 00:29:56.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1402.53 60.27 91754.87 2272.67 246655.73 00:29:56.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1410.03 60.59 87917.87 1658.42 172965.09 00:29:56.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1408.37 60.52 88152.35 1737.83 167242.71 00:29:56.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1389.61 59.71 89543.02 2189.48 173950.68 00:29:56.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1390.24 59.74 89651.30 1668.91 187134.94 00:29:56.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1386.48 59.58 90078.90 2129.09 169693.26 00:29:56.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1385.03 59.51 90379.99 1954.65 187424.24 00:29:56.106 ======================================================== 00:29:56.106 Total : 13907.36 597.58 90647.37 1507.62 264389.20 00:29:56.106 00:29:56.106 [2024-11-18 18:36:54.173956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017780 is same with the state(6) to be set 00:29:56.106 [2024-11-18 18:36:54.174345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000016d80 (9): Bad file descriptor 00:29:56.106 [2024-11-18 18:36:54.174417] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000016880 (9): Bad file descriptor 00:29:56.106 [2024-11-18 18:36:54.174458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000017c80 (9): Bad file descriptor 00:29:56.106 [2024-11-18 18:36:54.174497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000018180 (9): Bad file descriptor 00:29:56.106 [2024-11-18 18:36:54.174535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015980 (9): Bad file descriptor 00:29:56.106 [2024-11-18 18:36:54.174572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000017280 (9): Bad file descriptor 00:29:56.106 [2024-11-18 18:36:54.174631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000018680 (9): Bad file descriptor 00:29:56.106 [2024-11-18 18:36:54.174670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015e80 (9): Bad file descriptor 00:29:56.106 [2024-11-18 18:36:54.174706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000016380 (9): Bad file descriptor 00:29:56.106 [2024-11-18 18:36:54.174755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000017780 (9): Bad file descriptor 00:29:56.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:58.635 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3057403 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:29:59.570 18:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3057403 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3057403 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:59.570 rmmod nvme_tcp 00:29:59.570 rmmod nvme_fabrics 00:29:59.570 rmmod nvme_keyring 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3057091 ']' 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3057091 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3057091 ']' 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3057091 00:29:59.570 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3057091) - No such process 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3057091 is not found' 00:29:59.570 Process with pid 3057091 is not found 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.570 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.099 18:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:02.099 00:30:02.099 real 0m13.331s 00:30:02.099 user 0m37.314s 00:30:02.099 sys 0m5.181s 00:30:02.099 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:02.099 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:02.099 ************************************ 00:30:02.099 END TEST nvmf_shutdown_tc4 00:30:02.099 ************************************ 00:30:02.099 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:30:02.099 00:30:02.099 real 0m54.922s 00:30:02.099 user 2m48.982s 00:30:02.099 sys 0m13.298s 00:30:02.099 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:02.099 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:02.099 ************************************ 00:30:02.099 END TEST nvmf_shutdown 00:30:02.099 ************************************ 00:30:02.099 18:36:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:30:02.099 18:36:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:02.099 18:36:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:02.099 18:36:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:02.099 ************************************ 00:30:02.099 START TEST nvmf_nsid 00:30:02.099 ************************************ 00:30:02.099 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh 
--transport=tcp 00:30:02.099 * Looking for test storage... 00:30:02.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:02.099 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:02.099 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:30:02.099 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:30:02.099 18:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:02.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.099 --rc genhtml_branch_coverage=1 00:30:02.099 --rc genhtml_function_coverage=1 
00:30:02.099 --rc genhtml_legend=1 00:30:02.099 --rc geninfo_all_blocks=1 00:30:02.099 --rc geninfo_unexecuted_blocks=1 00:30:02.099 00:30:02.099 ' 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:02.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.099 --rc genhtml_branch_coverage=1 00:30:02.099 --rc genhtml_function_coverage=1 00:30:02.099 --rc genhtml_legend=1 00:30:02.099 --rc geninfo_all_blocks=1 00:30:02.099 --rc geninfo_unexecuted_blocks=1 00:30:02.099 00:30:02.099 ' 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:02.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.099 --rc genhtml_branch_coverage=1 00:30:02.099 --rc genhtml_function_coverage=1 00:30:02.099 --rc genhtml_legend=1 00:30:02.099 --rc geninfo_all_blocks=1 00:30:02.099 --rc geninfo_unexecuted_blocks=1 00:30:02.099 00:30:02.099 ' 00:30:02.099 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:02.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.099 --rc genhtml_branch_coverage=1 00:30:02.099 --rc genhtml_function_coverage=1 00:30:02.099 --rc genhtml_legend=1 00:30:02.099 --rc geninfo_all_blocks=1 00:30:02.100 --rc geninfo_unexecuted_blocks=1 00:30:02.100 00:30:02.100 ' 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:02.100 18:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:02.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:30:02.100 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:04.000 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:04.000 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:04.000 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.000 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:04.001 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:04.001 18:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.001 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:04.001 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:30:04.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:30:04.001 00:30:04.001 --- 10.0.0.2 ping statistics --- 00:30:04.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.001 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:04.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:04.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:30:04.001 00:30:04.001 --- 10.0.0.1 ping statistics --- 00:30:04.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.001 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:04.001 18:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3060351 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3060351 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3060351 ']' 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:04.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:04.001 18:37:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:04.001 [2024-11-18 18:37:02.211296] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:30:04.001 [2024-11-18 18:37:02.211447] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.259 [2024-11-18 18:37:02.363856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.259 [2024-11-18 18:37:02.499652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:04.259 [2024-11-18 18:37:02.499743] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.259 [2024-11-18 18:37:02.499768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:04.259 [2024-11-18 18:37:02.499792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:04.259 [2024-11-18 18:37:02.499810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:04.260 [2024-11-18 18:37:02.501424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3060467 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.193 
18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.193 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:30:05.194 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:30:05.194 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=5b2833a9-7795-40c1-9d94-e7bd01ffe74d 00:30:05.194 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:30:05.194 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=6a1c2bb8-47f8-428e-bc57-52628460a477 00:30:05.194 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:30:05.194 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=8065a937-d45d-4512-bd20-8d0664d17c6d 00:30:05.194 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:30:05.194 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.194 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:05.194 null0 00:30:05.194 null1 00:30:05.194 null2 00:30:05.194 [2024-11-18 18:37:03.267508] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.194 [2024-11-18 18:37:03.291856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.194 [2024-11-18 18:37:03.324052] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 
initialization... 00:30:05.194 [2024-11-18 18:37:03.324192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3060467 ] 00:30:05.194 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.194 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3060467 /var/tmp/tgt2.sock 00:30:05.194 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3060467 ']' 00:30:05.194 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:30:05.194 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:05.194 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:30:05.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:30:05.194 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:05.194 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:05.194 [2024-11-18 18:37:03.463738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.452 [2024-11-18 18:37:03.599193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.385 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:06.385 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:06.385 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:30:06.643 [2024-11-18 18:37:04.942297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.643 [2024-11-18 18:37:04.958656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:30:06.901 nvme0n1 nvme0n2 00:30:06.901 nvme1n1 00:30:06.901 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:30:06.901 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:30:06.902 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:07.468 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:30:07.468 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:30:07.468 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:30:07.468 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:30:07.468 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:30:07.468 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:30:07.468 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:30:07.468 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:07.468 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:07.468 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:07.468 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:30:07.468 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:30:07.468 18:37:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 5b2833a9-7795-40c1-9d94-e7bd01ffe74d 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:30:08.402 18:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5b2833a9779540c19d94e7bd01ffe74d 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5B2833A9779540C19D94E7BD01FFE74D 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 5B2833A9779540C19D94E7BD01FFE74D == \5\B\2\8\3\3\A\9\7\7\9\5\4\0\C\1\9\D\9\4\E\7\B\D\0\1\F\F\E\7\4\D ]] 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 6a1c2bb8-47f8-428e-bc57-52628460a477 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:30:08.402 
18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6a1c2bb847f8428ebc5752628460a477 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6A1C2BB847F8428EBC5752628460A477 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 6A1C2BB847F8428EBC5752628460A477 == \6\A\1\C\2\B\B\8\4\7\F\8\4\2\8\E\B\C\5\7\5\2\6\2\8\4\6\0\A\4\7\7 ]] 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 8065a937-d45d-4512-bd20-8d0664d17c6d 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:30:08.402 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:08.660 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8065a937d45d4512bd208d0664d17c6d 00:30:08.660 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8065A937D45D4512BD208D0664D17C6D 00:30:08.660 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 8065A937D45D4512BD208D0664D17C6D == \8\0\6\5\A\9\3\7\D\4\5\D\4\5\1\2\B\D\2\0\8\D\0\6\6\4\D\1\7\C\6\D ]] 00:30:08.660 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:30:08.919 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:30:08.919 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:30:08.919 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3060467 00:30:08.919 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3060467 ']' 00:30:08.919 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3060467 00:30:08.919 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:08.919 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:08.919 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3060467 00:30:08.919 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:08.919 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:08.919 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3060467' 00:30:08.919 killing process with pid 3060467 00:30:08.919 18:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3060467 00:30:08.919 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3060467 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:11.446 rmmod nvme_tcp 00:30:11.446 rmmod nvme_fabrics 00:30:11.446 rmmod nvme_keyring 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3060351 ']' 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3060351 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3060351 ']' 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3060351 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:11.446 18:37:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3060351 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3060351' 00:30:11.446 killing process with pid 3060351 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3060351 00:30:11.446 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3060351 00:30:12.379 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:12.379 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:12.379 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:12.379 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:30:12.379 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:30:12.379 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:12.379 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:30:12.379 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:12.379 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:12.379 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.379 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:12.379 18:37:10 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.314 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:14.314 00:30:14.314 real 0m12.573s 00:30:14.314 user 0m15.333s 00:30:14.314 sys 0m2.863s 00:30:14.314 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:14.314 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:14.314 ************************************ 00:30:14.314 END TEST nvmf_nsid 00:30:14.314 ************************************ 00:30:14.314 18:37:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:14.314 00:30:14.314 real 18m35.912s 00:30:14.314 user 51m12.890s 00:30:14.314 sys 3m30.018s 00:30:14.314 18:37:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:14.314 18:37:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:14.314 ************************************ 00:30:14.314 END TEST nvmf_target_extra 00:30:14.314 ************************************ 00:30:14.314 18:37:12 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:14.314 18:37:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:14.314 18:37:12 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:14.314 18:37:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:14.314 ************************************ 00:30:14.314 START TEST nvmf_host 00:30:14.314 ************************************ 00:30:14.314 18:37:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:14.314 * Looking for test storage... 
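The killprocess teardown traced above (autotest_common.sh@954-978) checks pid liveness with `kill -0`, inspects the process's comm name so it never kills the `sudo` wrapper, then kills and waits. A minimal runnable sketch of that pattern; the helper's structure here is illustrative, not the exact autotest_common.sh source:

```shell
# Hedged sketch of the killprocess pattern visible in the trace above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 0    # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")   # comm name of the target
    [ "$name" = sudo ] && return 1            # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap; ignore SIGTERM status
}
```

The `wait` at the end matters: without it the test harness can race the dying reactor process when it tears down the network namespace next.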
00:30:14.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:14.315 18:37:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:14.315 18:37:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:30:14.315 18:37:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:14.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.574 --rc genhtml_branch_coverage=1 00:30:14.574 --rc genhtml_function_coverage=1 00:30:14.574 --rc genhtml_legend=1 00:30:14.574 --rc geninfo_all_blocks=1 00:30:14.574 --rc geninfo_unexecuted_blocks=1 00:30:14.574 00:30:14.574 ' 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:14.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.574 --rc genhtml_branch_coverage=1 00:30:14.574 --rc genhtml_function_coverage=1 00:30:14.574 --rc genhtml_legend=1 00:30:14.574 --rc 
geninfo_all_blocks=1 00:30:14.574 --rc geninfo_unexecuted_blocks=1 00:30:14.574 00:30:14.574 ' 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:14.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.574 --rc genhtml_branch_coverage=1 00:30:14.574 --rc genhtml_function_coverage=1 00:30:14.574 --rc genhtml_legend=1 00:30:14.574 --rc geninfo_all_blocks=1 00:30:14.574 --rc geninfo_unexecuted_blocks=1 00:30:14.574 00:30:14.574 ' 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:14.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.574 --rc genhtml_branch_coverage=1 00:30:14.574 --rc genhtml_function_coverage=1 00:30:14.574 --rc genhtml_legend=1 00:30:14.574 --rc geninfo_all_blocks=1 00:30:14.574 --rc geninfo_unexecuted_blocks=1 00:30:14.574 00:30:14.574 ' 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:14.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.574 ************************************ 00:30:14.574 START TEST nvmf_multicontroller 00:30:14.574 ************************************ 00:30:14.574 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:14.574 * Looking for test storage... 
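The `[: : integer expression expected` warning captured above comes from nvmf/common.sh line 33 evaluating `'[' '' -eq 1 ']'`: an unset variable compared numerically. A defensive sketch of the guard (the variable name `SOME_FLAG` is a hypothetical stand-in, not the upstream variable or fix):

```shell
# SOME_FLAG stands in for the unset variable tested at nvmf/common.sh:33.
# Defaulting it first keeps the numeric [ ... -eq ... ] test well-formed
# instead of emitting "integer expression expected".
SOME_FLAG=${SOME_FLAG:-0}
if [ "$SOME_FLAG" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled"
fi
```

The warning is harmless here only because the test falls through to the else branch; the same construct would misbehave under `set -e` in stricter scripts.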
00:30:14.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:14.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.575 --rc genhtml_branch_coverage=1 00:30:14.575 --rc genhtml_function_coverage=1 
00:30:14.575 --rc genhtml_legend=1 00:30:14.575 --rc geninfo_all_blocks=1 00:30:14.575 --rc geninfo_unexecuted_blocks=1 00:30:14.575 00:30:14.575 ' 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:14.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.575 --rc genhtml_branch_coverage=1 00:30:14.575 --rc genhtml_function_coverage=1 00:30:14.575 --rc genhtml_legend=1 00:30:14.575 --rc geninfo_all_blocks=1 00:30:14.575 --rc geninfo_unexecuted_blocks=1 00:30:14.575 00:30:14.575 ' 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:14.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.575 --rc genhtml_branch_coverage=1 00:30:14.575 --rc genhtml_function_coverage=1 00:30:14.575 --rc genhtml_legend=1 00:30:14.575 --rc geninfo_all_blocks=1 00:30:14.575 --rc geninfo_unexecuted_blocks=1 00:30:14.575 00:30:14.575 ' 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:14.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.575 --rc genhtml_branch_coverage=1 00:30:14.575 --rc genhtml_function_coverage=1 00:30:14.575 --rc genhtml_legend=1 00:30:14.575 --rc geninfo_all_blocks=1 00:30:14.575 --rc geninfo_unexecuted_blocks=1 00:30:14.575 00:30:14.575 ' 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.575 18:37:12 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:14.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:14.575 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:14.576 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:14.576 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:14.576 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:14.576 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:14.576 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:14.576 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:14.576 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:30:14.576 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.576 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:14.576 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.576 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:14.576 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:14.576 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:30:14.576 18:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:17.105 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:17.106 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:17.106 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.106 18:37:14 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:17.106 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:17.106 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:17.106 18:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:17.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:17.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:30:17.106 00:30:17.106 --- 10.0.0.2 ping statistics --- 00:30:17.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.106 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:17.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:17.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:30:17.106 00:30:17.106 --- 10.0.0.1 ping statistics --- 00:30:17.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.106 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3063404 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3063404 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3063404 ']' 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:17.106 18:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.107 [2024-11-18 18:37:15.196918] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:30:17.107 [2024-11-18 18:37:15.197078] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:17.107 [2024-11-18 18:37:15.344330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:17.364 [2024-11-18 18:37:15.483284] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:17.365 [2024-11-18 18:37:15.483352] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:17.365 [2024-11-18 18:37:15.483377] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:17.365 [2024-11-18 18:37:15.483402] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:17.365 [2024-11-18 18:37:15.483424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:17.365 [2024-11-18 18:37:15.486156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:17.365 [2024-11-18 18:37:15.486250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.365 [2024-11-18 18:37:15.486256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:17.930 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:17.930 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:17.930 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:17.930 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:17.930 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.930 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:17.930 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:17.930 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.930 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.930 [2024-11-18 18:37:16.175680] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.930 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.930 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:17.930 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.930 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.188 Malloc0 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.188 [2024-11-18 
18:37:16.290162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.188 [2024-11-18 18:37:16.298038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.188 Malloc1 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3063561 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3063561 /var/tmp/bdevperf.sock 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3063561 ']' 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:18.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:18.188 18:37:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.121 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:19.121 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:19.121 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:19.121 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.121 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.380 NVMe0n1 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.380 1 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:19.380 18:37:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.380 request: 00:30:19.380 { 00:30:19.380 "name": "NVMe0", 00:30:19.380 "trtype": "tcp", 00:30:19.380 "traddr": "10.0.0.2", 00:30:19.380 "adrfam": "ipv4", 00:30:19.380 "trsvcid": "4420", 00:30:19.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:19.380 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:19.380 "hostaddr": "10.0.0.1", 00:30:19.380 "prchk_reftag": false, 00:30:19.380 "prchk_guard": false, 00:30:19.380 "hdgst": false, 00:30:19.380 "ddgst": false, 00:30:19.380 "allow_unrecognized_csi": false, 00:30:19.380 "method": "bdev_nvme_attach_controller", 00:30:19.380 "req_id": 1 00:30:19.380 } 00:30:19.380 Got JSON-RPC error response 00:30:19.380 response: 00:30:19.380 { 00:30:19.380 "code": -114, 00:30:19.380 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:19.380 } 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:19.380 18:37:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.380 request: 00:30:19.380 { 00:30:19.380 "name": "NVMe0", 00:30:19.380 "trtype": "tcp", 00:30:19.380 "traddr": "10.0.0.2", 00:30:19.380 "adrfam": "ipv4", 00:30:19.380 "trsvcid": "4420", 00:30:19.380 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:19.380 "hostaddr": "10.0.0.1", 00:30:19.380 "prchk_reftag": false, 00:30:19.380 "prchk_guard": false, 00:30:19.380 "hdgst": false, 00:30:19.380 "ddgst": false, 00:30:19.380 "allow_unrecognized_csi": false, 00:30:19.380 "method": "bdev_nvme_attach_controller", 00:30:19.380 "req_id": 1 00:30:19.380 } 00:30:19.380 Got JSON-RPC error response 00:30:19.380 response: 00:30:19.380 { 00:30:19.380 "code": -114, 00:30:19.380 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:19.380 } 00:30:19.380 18:37:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:19.380 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.381 request: 00:30:19.381 { 00:30:19.381 "name": "NVMe0", 00:30:19.381 "trtype": "tcp", 00:30:19.381 "traddr": "10.0.0.2", 00:30:19.381 "adrfam": "ipv4", 00:30:19.381 "trsvcid": "4420", 00:30:19.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:19.381 "hostaddr": "10.0.0.1", 00:30:19.381 "prchk_reftag": false, 00:30:19.381 "prchk_guard": false, 00:30:19.381 "hdgst": false, 00:30:19.381 "ddgst": false, 00:30:19.381 "multipath": "disable", 00:30:19.381 "allow_unrecognized_csi": false, 00:30:19.381 "method": "bdev_nvme_attach_controller", 00:30:19.381 "req_id": 1 00:30:19.381 } 00:30:19.381 Got JSON-RPC error response 00:30:19.381 response: 00:30:19.381 { 00:30:19.381 "code": -114, 00:30:19.381 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:30:19.381 } 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.381 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.381 request: 00:30:19.381 { 00:30:19.381 "name": "NVMe0", 00:30:19.381 "trtype": "tcp", 00:30:19.381 "traddr": "10.0.0.2", 00:30:19.381 "adrfam": "ipv4", 00:30:19.381 "trsvcid": "4420", 00:30:19.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:19.381 "hostaddr": "10.0.0.1", 00:30:19.381 "prchk_reftag": false, 00:30:19.381 "prchk_guard": false, 00:30:19.381 "hdgst": false, 00:30:19.381 "ddgst": false, 00:30:19.381 "multipath": "failover", 00:30:19.381 "allow_unrecognized_csi": false, 00:30:19.381 "method": "bdev_nvme_attach_controller", 00:30:19.381 "req_id": 1 00:30:19.381 } 00:30:19.381 Got JSON-RPC error response 00:30:19.639 response: 00:30:19.639 { 00:30:19.639 "code": -114, 00:30:19.639 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:19.639 } 00:30:19.639 18:37:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.639 NVMe0n1 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.639 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:19.639 18:37:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:21.013 { 00:30:21.013 "results": [ 00:30:21.013 { 00:30:21.013 "job": "NVMe0n1", 00:30:21.013 "core_mask": "0x1", 00:30:21.013 "workload": "write", 00:30:21.013 "status": "finished", 00:30:21.013 "queue_depth": 128, 00:30:21.013 "io_size": 4096, 00:30:21.013 "runtime": 1.008424, 00:30:21.013 "iops": 13163.113928268269, 00:30:21.013 "mibps": 51.418413782297925, 00:30:21.013 "io_failed": 0, 00:30:21.013 "io_timeout": 0, 00:30:21.013 "avg_latency_us": 9692.437630120703, 00:30:21.013 "min_latency_us": 2572.8948148148147, 00:30:21.013 "max_latency_us": 19029.712592592594 00:30:21.013 } 00:30:21.013 ], 00:30:21.013 "core_count": 1 00:30:21.013 } 00:30:21.013 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:21.013 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.013 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.013 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.013 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:30:21.013 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3063561 00:30:21.013 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3063561 ']' 00:30:21.013 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3063561 00:30:21.013 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:21.013 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:21.013 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3063561 00:30:21.013 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:21.013 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:21.013 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3063561' 00:30:21.013 killing process with pid 3063561 00:30:21.014 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3063561 00:30:21.014 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3063561 00:30:21.947 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:21.947 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.947 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.947 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.947 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:21.947 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.947 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.947 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.947 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:21.947 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:21.947 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:21.947 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:21.947 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:30:21.947 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:30:21.947 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:21.947 [2024-11-18 18:37:16.486157] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:30:21.947 [2024-11-18 18:37:16.486313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3063561 ] 00:30:21.947 [2024-11-18 18:37:16.624331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.947 [2024-11-18 18:37:16.751153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.947 [2024-11-18 18:37:17.901030] bdev.c:4686:bdev_name_add: *ERROR*: Bdev name 290a943a-d502-44fe-90ad-655ca785cd1e already exists 00:30:21.947 [2024-11-18 18:37:17.901083] bdev.c:7824:bdev_register: *ERROR*: Unable to add uuid:290a943a-d502-44fe-90ad-655ca785cd1e alias for bdev NVMe1n1 00:30:21.947 [2024-11-18 18:37:17.901126] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:21.947 Running I/O for 1 seconds... 00:30:21.947 13082.00 IOPS, 51.10 MiB/s 00:30:21.947 Latency(us) 00:30:21.947 [2024-11-18T17:37:20.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.947 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:21.947 NVMe0n1 : 1.01 13163.11 51.42 0.00 0.00 9692.44 2572.89 19029.71 00:30:21.947 [2024-11-18T17:37:20.285Z] =================================================================================================================== 00:30:21.948 [2024-11-18T17:37:20.285Z] Total : 13163.11 51.42 0.00 0.00 9692.44 2572.89 19029.71 00:30:21.948 Received shutdown signal, test time was about 1.000000 seconds 00:30:21.948 00:30:21.948 Latency(us) 00:30:21.948 [2024-11-18T17:37:20.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.948 [2024-11-18T17:37:20.285Z] =================================================================================================================== 00:30:21.948 [2024-11-18T17:37:20.285Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:30:21.948 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:21.948 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:21.948 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:21.948 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:30:21.948 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:21.948 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:30:21.948 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:21.948 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:30:21.948 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:21.948 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:21.948 rmmod nvme_tcp 00:30:21.948 rmmod nvme_fabrics 00:30:21.948 rmmod nvme_keyring 00:30:21.948 18:37:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:21.948 18:37:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:30:21.948 18:37:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:30:21.948 18:37:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3063404 ']' 00:30:21.948 18:37:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3063404 00:30:21.948 18:37:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3063404 ']' 00:30:21.948 18:37:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3063404 
00:30:21.948 18:37:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:21.948 18:37:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:21.948 18:37:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3063404 00:30:21.948 18:37:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:21.948 18:37:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:21.948 18:37:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3063404' 00:30:21.948 killing process with pid 3063404 00:30:21.948 18:37:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3063404 00:30:21.948 18:37:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3063404 00:30:23.321 18:37:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:23.321 18:37:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:23.321 18:37:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:23.321 18:37:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:23.321 18:37:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:30:23.321 18:37:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:23.321 18:37:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:30:23.321 18:37:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:23.321 18:37:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:30:23.321 18:37:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.321 18:37:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.321 18:37:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.221 18:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:25.221 00:30:25.221 real 0m10.744s 00:30:25.221 user 0m21.915s 00:30:25.221 sys 0m2.657s 00:30:25.221 18:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:25.221 18:37:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:25.221 ************************************ 00:30:25.221 END TEST nvmf_multicontroller 00:30:25.221 ************************************ 00:30:25.221 18:37:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:25.221 18:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:25.221 18:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:25.221 18:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.221 ************************************ 00:30:25.221 START TEST nvmf_aer 00:30:25.221 ************************************ 00:30:25.221 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:25.480 * Looking for test storage... 
00:30:25.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:25.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.480 --rc genhtml_branch_coverage=1 00:30:25.480 --rc genhtml_function_coverage=1 00:30:25.480 --rc genhtml_legend=1 00:30:25.480 --rc geninfo_all_blocks=1 00:30:25.480 --rc geninfo_unexecuted_blocks=1 00:30:25.480 00:30:25.480 ' 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:25.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.480 --rc 
genhtml_branch_coverage=1 00:30:25.480 --rc genhtml_function_coverage=1 00:30:25.480 --rc genhtml_legend=1 00:30:25.480 --rc geninfo_all_blocks=1 00:30:25.480 --rc geninfo_unexecuted_blocks=1 00:30:25.480 00:30:25.480 ' 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:25.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.480 --rc genhtml_branch_coverage=1 00:30:25.480 --rc genhtml_function_coverage=1 00:30:25.480 --rc genhtml_legend=1 00:30:25.480 --rc geninfo_all_blocks=1 00:30:25.480 --rc geninfo_unexecuted_blocks=1 00:30:25.480 00:30:25.480 ' 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:25.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.480 --rc genhtml_branch_coverage=1 00:30:25.480 --rc genhtml_function_coverage=1 00:30:25.480 --rc genhtml_legend=1 00:30:25.480 --rc geninfo_all_blocks=1 00:30:25.480 --rc geninfo_unexecuted_blocks=1 00:30:25.480 00:30:25.480 ' 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:25.480 18:37:23 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:25.480 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:25.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:25.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:27.381 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:27.381 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:27.381 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:27.381 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:27.381 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:27.381 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:27.381 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:27.381 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:27.381 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:27.381 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:27.382 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:27.382 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.382 18:37:25 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:27.382 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:27.382 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:27.382 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:27.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:27.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:30:27.641 00:30:27.641 --- 10.0.0.2 ping statistics --- 00:30:27.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.641 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:27.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:27.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:30:27.641 00:30:27.641 --- 10.0.0.1 ping statistics --- 00:30:27.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.641 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3066049 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3066049 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3066049 ']' 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:27.641 18:37:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:27.641 [2024-11-18 18:37:25.882520] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:30:27.641 [2024-11-18 18:37:25.882664] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:27.899 [2024-11-18 18:37:26.035152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:27.899 [2024-11-18 18:37:26.176995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:27.899 [2024-11-18 18:37:26.177077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:27.899 [2024-11-18 18:37:26.177103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:27.899 [2024-11-18 18:37:26.177127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:27.899 [2024-11-18 18:37:26.177147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:27.899 [2024-11-18 18:37:26.179893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.899 [2024-11-18 18:37:26.179963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:27.900 [2024-11-18 18:37:26.180061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.900 [2024-11-18 18:37:26.180068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:28.832 18:37:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:28.832 18:37:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:30:28.832 18:37:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:28.832 18:37:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:28.832 18:37:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.832 18:37:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.832 18:37:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:28.832 18:37:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.832 18:37:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.832 [2024-11-18 18:37:26.904784] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:28.832 18:37:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.832 18:37:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:28.832 18:37:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.832 18:37:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.832 Malloc0 00:30:28.832 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.832 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:28.832 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.832 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.832 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.832 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:28.832 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.832 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.832 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.832 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.833 [2024-11-18 18:37:27.024005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.833 [ 00:30:28.833 { 00:30:28.833 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:28.833 "subtype": "Discovery", 00:30:28.833 "listen_addresses": [], 00:30:28.833 "allow_any_host": true, 00:30:28.833 "hosts": [] 00:30:28.833 }, 00:30:28.833 { 00:30:28.833 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:28.833 "subtype": "NVMe", 00:30:28.833 "listen_addresses": [ 00:30:28.833 { 00:30:28.833 "trtype": "TCP", 00:30:28.833 "adrfam": "IPv4", 00:30:28.833 "traddr": "10.0.0.2", 00:30:28.833 "trsvcid": "4420" 00:30:28.833 } 00:30:28.833 ], 00:30:28.833 "allow_any_host": true, 00:30:28.833 "hosts": [], 00:30:28.833 "serial_number": "SPDK00000000000001", 00:30:28.833 "model_number": "SPDK bdev Controller", 00:30:28.833 "max_namespaces": 2, 00:30:28.833 "min_cntlid": 1, 00:30:28.833 "max_cntlid": 65519, 00:30:28.833 "namespaces": [ 00:30:28.833 { 00:30:28.833 "nsid": 1, 00:30:28.833 "bdev_name": "Malloc0", 00:30:28.833 "name": "Malloc0", 00:30:28.833 "nguid": "3954EEDA37BF45ADA8B40E9FEA2AB0D5", 00:30:28.833 "uuid": "3954eeda-37bf-45ad-a8b4-0e9fea2ab0d5" 00:30:28.833 } 00:30:28.833 ] 00:30:28.833 } 00:30:28.833 ] 00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3066209 00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:30:28.833 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:29.090 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:29.090 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:30:29.091 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:30:29.091 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:29.091 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:29.091 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 3 -lt 200 ']' 00:30:29.091 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=4 00:30:29.091 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:29.348 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:29.348 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:29.348 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:30:29.348 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:29.348 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.348 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.348 Malloc1 00:30:29.349 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.349 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:29.349 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.349 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.349 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.349 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:29.349 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.349 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.349 [ 00:30:29.349 { 00:30:29.349 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:29.349 "subtype": "Discovery", 00:30:29.349 
"listen_addresses": [], 00:30:29.349 "allow_any_host": true, 00:30:29.349 "hosts": [] 00:30:29.349 }, 00:30:29.349 { 00:30:29.349 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:29.349 "subtype": "NVMe", 00:30:29.349 "listen_addresses": [ 00:30:29.349 { 00:30:29.349 "trtype": "TCP", 00:30:29.349 "adrfam": "IPv4", 00:30:29.349 "traddr": "10.0.0.2", 00:30:29.349 "trsvcid": "4420" 00:30:29.349 } 00:30:29.349 ], 00:30:29.349 "allow_any_host": true, 00:30:29.349 "hosts": [], 00:30:29.349 "serial_number": "SPDK00000000000001", 00:30:29.349 "model_number": "SPDK bdev Controller", 00:30:29.349 "max_namespaces": 2, 00:30:29.349 "min_cntlid": 1, 00:30:29.349 "max_cntlid": 65519, 00:30:29.349 "namespaces": [ 00:30:29.349 { 00:30:29.349 "nsid": 1, 00:30:29.349 "bdev_name": "Malloc0", 00:30:29.349 "name": "Malloc0", 00:30:29.349 "nguid": "3954EEDA37BF45ADA8B40E9FEA2AB0D5", 00:30:29.349 "uuid": "3954eeda-37bf-45ad-a8b4-0e9fea2ab0d5" 00:30:29.349 }, 00:30:29.349 { 00:30:29.349 "nsid": 2, 00:30:29.349 "bdev_name": "Malloc1", 00:30:29.349 "name": "Malloc1", 00:30:29.349 "nguid": "EA2A7B75274E4E3AB145DCEDF7C14BDB", 00:30:29.349 "uuid": "ea2a7b75-274e-4e3a-b145-dcedf7c14bdb" 00:30:29.349 } 00:30:29.349 ] 00:30:29.349 } 00:30:29.349 ] 00:30:29.349 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.349 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3066209 00:30:29.349 Asynchronous Event Request test 00:30:29.349 Attaching to 10.0.0.2 00:30:29.349 Attached to 10.0.0.2 00:30:29.349 Registering asynchronous event callbacks... 00:30:29.349 Starting namespace attribute notice tests for all controllers... 00:30:29.349 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:29.349 aer_cb - Changed Namespace 00:30:29.349 Cleaning up... 
00:30:29.606 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:29.606 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.606 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.606 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.606 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:29.606 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.606 18:37:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:29.864 rmmod nvme_tcp 
00:30:29.864 rmmod nvme_fabrics 00:30:29.864 rmmod nvme_keyring 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3066049 ']' 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3066049 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3066049 ']' 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3066049 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3066049 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3066049' 00:30:29.864 killing process with pid 3066049 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3066049 00:30:29.864 18:37:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3066049 00:30:31.236 18:37:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:31.236 18:37:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:31.236 18:37:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:31.236 18:37:29 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:30:31.236 18:37:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:30:31.236 18:37:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:30:31.236 18:37:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:31.236 18:37:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:31.236 18:37:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:31.236 18:37:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.236 18:37:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.236 18:37:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.135 18:37:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:33.135 00:30:33.135 real 0m7.773s 00:30:33.135 user 0m12.031s 00:30:33.135 sys 0m2.227s 00:30:33.135 18:37:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:33.135 18:37:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:33.135 ************************************ 00:30:33.135 END TEST nvmf_aer 00:30:33.135 ************************************ 00:30:33.135 18:37:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:33.135 18:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:33.135 18:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:33.135 18:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.135 ************************************ 00:30:33.135 START TEST nvmf_async_init 
00:30:33.135 ************************************ 00:30:33.135 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:33.135 * Looking for test storage... 00:30:33.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:33.135 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:33.135 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:30:33.135 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:33.393 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:33.393 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init 
-- scripts/common.sh@344 -- # case "$op" in 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:33.394 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:30:33.394 --rc genhtml_branch_coverage=1 00:30:33.394 --rc genhtml_function_coverage=1 00:30:33.394 --rc genhtml_legend=1 00:30:33.394 --rc geninfo_all_blocks=1 00:30:33.394 --rc geninfo_unexecuted_blocks=1 00:30:33.394 00:30:33.394 ' 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:33.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.394 --rc genhtml_branch_coverage=1 00:30:33.394 --rc genhtml_function_coverage=1 00:30:33.394 --rc genhtml_legend=1 00:30:33.394 --rc geninfo_all_blocks=1 00:30:33.394 --rc geninfo_unexecuted_blocks=1 00:30:33.394 00:30:33.394 ' 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:33.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.394 --rc genhtml_branch_coverage=1 00:30:33.394 --rc genhtml_function_coverage=1 00:30:33.394 --rc genhtml_legend=1 00:30:33.394 --rc geninfo_all_blocks=1 00:30:33.394 --rc geninfo_unexecuted_blocks=1 00:30:33.394 00:30:33.394 ' 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:33.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.394 --rc genhtml_branch_coverage=1 00:30:33.394 --rc genhtml_function_coverage=1 00:30:33.394 --rc genhtml_legend=1 00:30:33.394 --rc geninfo_all_blocks=1 00:30:33.394 --rc geninfo_unexecuted_blocks=1 00:30:33.394 00:30:33.394 ' 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:33.394 18:37:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:33.394 
18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:33.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ce9db4e8ac9842979ef870dd031ce473 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:33.394 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:33.395 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:33.395 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:33.395 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:33.395 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.395 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.395 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.395 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:33.395 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:33.395 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:33.395 18:37:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:35.295 18:37:33 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:35.295 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:35.295 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:35.295 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:35.295 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:30:35.295 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:35.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:35.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:30:35.296 00:30:35.296 --- 10.0.0.2 ping statistics --- 00:30:35.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.296 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:35.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:35.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:30:35.296 00:30:35.296 --- 10.0.0.1 ping statistics --- 00:30:35.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.296 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:30:35.296 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:35.554 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3068399 00:30:35.554 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:35.554 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3068399 00:30:35.554 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3068399 ']' 00:30:35.554 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.554 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:35.554 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.554 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:35.554 18:37:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:35.554 [2024-11-18 18:37:33.724299] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:30:35.554 [2024-11-18 18:37:33.724437] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.554 [2024-11-18 18:37:33.877658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.812 [2024-11-18 18:37:34.017455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.812 [2024-11-18 18:37:34.017547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:35.812 [2024-11-18 18:37:34.017574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.812 [2024-11-18 18:37:34.017599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:35.812 [2024-11-18 18:37:34.017629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:35.812 [2024-11-18 18:37:34.019250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.746 [2024-11-18 18:37:34.759507] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.746 null0 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ce9db4e8ac9842979ef870dd031ce473 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.746 [2024-11-18 18:37:34.799842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.746 18:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.746 nvme0n1 00:30:36.746 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.746 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:36.746 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.746 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.746 [ 00:30:36.746 { 00:30:36.746 "name": "nvme0n1", 00:30:36.746 "aliases": [ 00:30:36.746 "ce9db4e8-ac98-4297-9ef8-70dd031ce473" 00:30:36.746 ], 00:30:36.746 "product_name": "NVMe disk", 00:30:36.746 "block_size": 512, 00:30:36.746 "num_blocks": 2097152, 00:30:36.746 "uuid": "ce9db4e8-ac98-4297-9ef8-70dd031ce473", 00:30:36.746 "numa_id": 0, 00:30:36.746 "assigned_rate_limits": { 00:30:36.746 "rw_ios_per_sec": 0, 00:30:36.746 "rw_mbytes_per_sec": 0, 00:30:36.746 "r_mbytes_per_sec": 0, 00:30:36.746 "w_mbytes_per_sec": 0 00:30:36.746 }, 00:30:36.746 "claimed": false, 00:30:36.746 "zoned": false, 00:30:36.746 "supported_io_types": { 00:30:36.746 "read": true, 00:30:36.746 "write": true, 00:30:36.746 "unmap": false, 00:30:36.746 "flush": true, 00:30:36.746 "reset": true, 00:30:36.746 "nvme_admin": true, 00:30:36.746 "nvme_io": true, 00:30:36.746 "nvme_io_md": false, 00:30:36.746 "write_zeroes": true, 00:30:36.747 "zcopy": false, 00:30:36.747 "get_zone_info": false, 00:30:36.747 "zone_management": false, 00:30:36.747 "zone_append": false, 00:30:36.747 "compare": true, 00:30:36.747 "compare_and_write": true, 00:30:36.747 "abort": true, 00:30:36.747 "seek_hole": false, 00:30:36.747 "seek_data": false, 00:30:36.747 "copy": true, 00:30:36.747 
"nvme_iov_md": false 00:30:36.747 }, 00:30:36.747 "memory_domains": [ 00:30:36.747 { 00:30:36.747 "dma_device_id": "system", 00:30:36.747 "dma_device_type": 1 00:30:36.747 } 00:30:36.747 ], 00:30:36.747 "driver_specific": { 00:30:36.747 "nvme": [ 00:30:36.747 { 00:30:36.747 "trid": { 00:30:36.747 "trtype": "TCP", 00:30:36.747 "adrfam": "IPv4", 00:30:36.747 "traddr": "10.0.0.2", 00:30:36.747 "trsvcid": "4420", 00:30:36.747 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:36.747 }, 00:30:36.747 "ctrlr_data": { 00:30:36.747 "cntlid": 1, 00:30:36.747 "vendor_id": "0x8086", 00:30:36.747 "model_number": "SPDK bdev Controller", 00:30:36.747 "serial_number": "00000000000000000000", 00:30:36.747 "firmware_revision": "25.01", 00:30:36.747 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:36.747 "oacs": { 00:30:36.747 "security": 0, 00:30:36.747 "format": 0, 00:30:36.747 "firmware": 0, 00:30:36.747 "ns_manage": 0 00:30:36.747 }, 00:30:36.747 "multi_ctrlr": true, 00:30:36.747 "ana_reporting": false 00:30:36.747 }, 00:30:36.747 "vs": { 00:30:36.747 "nvme_version": "1.3" 00:30:36.747 }, 00:30:36.747 "ns_data": { 00:30:36.747 "id": 1, 00:30:36.747 "can_share": true 00:30:36.747 } 00:30:36.747 } 00:30:36.747 ], 00:30:36.747 "mp_policy": "active_passive" 00:30:36.747 } 00:30:36.747 } 00:30:36.747 ] 00:30:36.747 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.747 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:36.747 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.747 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.747 [2024-11-18 18:37:35.056352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:36.747 [2024-11-18 18:37:35.056490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:30:37.005 [2024-11-18 18:37:35.188865] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:30:37.005 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.005 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:37.005 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.005 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:37.005 [ 00:30:37.005 { 00:30:37.005 "name": "nvme0n1", 00:30:37.005 "aliases": [ 00:30:37.005 "ce9db4e8-ac98-4297-9ef8-70dd031ce473" 00:30:37.005 ], 00:30:37.005 "product_name": "NVMe disk", 00:30:37.005 "block_size": 512, 00:30:37.005 "num_blocks": 2097152, 00:30:37.006 "uuid": "ce9db4e8-ac98-4297-9ef8-70dd031ce473", 00:30:37.006 "numa_id": 0, 00:30:37.006 "assigned_rate_limits": { 00:30:37.006 "rw_ios_per_sec": 0, 00:30:37.006 "rw_mbytes_per_sec": 0, 00:30:37.006 "r_mbytes_per_sec": 0, 00:30:37.006 "w_mbytes_per_sec": 0 00:30:37.006 }, 00:30:37.006 "claimed": false, 00:30:37.006 "zoned": false, 00:30:37.006 "supported_io_types": { 00:30:37.006 "read": true, 00:30:37.006 "write": true, 00:30:37.006 "unmap": false, 00:30:37.006 "flush": true, 00:30:37.006 "reset": true, 00:30:37.006 "nvme_admin": true, 00:30:37.006 "nvme_io": true, 00:30:37.006 "nvme_io_md": false, 00:30:37.006 "write_zeroes": true, 00:30:37.006 "zcopy": false, 00:30:37.006 "get_zone_info": false, 00:30:37.006 "zone_management": false, 00:30:37.006 "zone_append": false, 00:30:37.006 "compare": true, 00:30:37.006 "compare_and_write": true, 00:30:37.006 "abort": true, 00:30:37.006 "seek_hole": false, 00:30:37.006 "seek_data": false, 00:30:37.006 "copy": true, 00:30:37.006 "nvme_iov_md": false 00:30:37.006 }, 00:30:37.006 "memory_domains": [ 
00:30:37.006 { 00:30:37.006 "dma_device_id": "system", 00:30:37.006 "dma_device_type": 1 00:30:37.006 } 00:30:37.006 ], 00:30:37.006 "driver_specific": { 00:30:37.006 "nvme": [ 00:30:37.006 { 00:30:37.006 "trid": { 00:30:37.006 "trtype": "TCP", 00:30:37.006 "adrfam": "IPv4", 00:30:37.006 "traddr": "10.0.0.2", 00:30:37.006 "trsvcid": "4420", 00:30:37.006 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:37.006 }, 00:30:37.006 "ctrlr_data": { 00:30:37.006 "cntlid": 2, 00:30:37.006 "vendor_id": "0x8086", 00:30:37.006 "model_number": "SPDK bdev Controller", 00:30:37.006 "serial_number": "00000000000000000000", 00:30:37.006 "firmware_revision": "25.01", 00:30:37.006 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:37.006 "oacs": { 00:30:37.006 "security": 0, 00:30:37.006 "format": 0, 00:30:37.006 "firmware": 0, 00:30:37.006 "ns_manage": 0 00:30:37.006 }, 00:30:37.006 "multi_ctrlr": true, 00:30:37.006 "ana_reporting": false 00:30:37.006 }, 00:30:37.006 "vs": { 00:30:37.006 "nvme_version": "1.3" 00:30:37.006 }, 00:30:37.006 "ns_data": { 00:30:37.006 "id": 1, 00:30:37.006 "can_share": true 00:30:37.006 } 00:30:37.006 } 00:30:37.006 ], 00:30:37.006 "mp_policy": "active_passive" 00:30:37.006 } 00:30:37.006 } 00:30:37.006 ] 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.UzYlCTa6H5 
00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.UzYlCTa6H5 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.UzYlCTa6H5 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:37.006 [2024-11-18 18:37:35.249121] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:37.006 [2024-11-18 18:37:35.249413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.006 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:37.006 [2024-11-18 18:37:35.265149] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:37.264 nvme0n1 00:30:37.264 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.264 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:37.264 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.264 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:37.264 [ 00:30:37.264 { 00:30:37.264 "name": "nvme0n1", 00:30:37.264 "aliases": [ 00:30:37.264 "ce9db4e8-ac98-4297-9ef8-70dd031ce473" 00:30:37.264 ], 00:30:37.264 "product_name": "NVMe disk", 00:30:37.264 "block_size": 512, 00:30:37.264 "num_blocks": 2097152, 00:30:37.264 "uuid": "ce9db4e8-ac98-4297-9ef8-70dd031ce473", 00:30:37.264 "numa_id": 0, 00:30:37.264 "assigned_rate_limits": { 00:30:37.264 "rw_ios_per_sec": 0, 00:30:37.264 
"rw_mbytes_per_sec": 0, 00:30:37.264 "r_mbytes_per_sec": 0, 00:30:37.264 "w_mbytes_per_sec": 0 00:30:37.264 }, 00:30:37.264 "claimed": false, 00:30:37.264 "zoned": false, 00:30:37.264 "supported_io_types": { 00:30:37.264 "read": true, 00:30:37.264 "write": true, 00:30:37.264 "unmap": false, 00:30:37.264 "flush": true, 00:30:37.264 "reset": true, 00:30:37.264 "nvme_admin": true, 00:30:37.264 "nvme_io": true, 00:30:37.264 "nvme_io_md": false, 00:30:37.264 "write_zeroes": true, 00:30:37.264 "zcopy": false, 00:30:37.264 "get_zone_info": false, 00:30:37.264 "zone_management": false, 00:30:37.264 "zone_append": false, 00:30:37.264 "compare": true, 00:30:37.264 "compare_and_write": true, 00:30:37.264 "abort": true, 00:30:37.264 "seek_hole": false, 00:30:37.264 "seek_data": false, 00:30:37.264 "copy": true, 00:30:37.264 "nvme_iov_md": false 00:30:37.264 }, 00:30:37.264 "memory_domains": [ 00:30:37.264 { 00:30:37.264 "dma_device_id": "system", 00:30:37.264 "dma_device_type": 1 00:30:37.264 } 00:30:37.264 ], 00:30:37.264 "driver_specific": { 00:30:37.264 "nvme": [ 00:30:37.264 { 00:30:37.264 "trid": { 00:30:37.264 "trtype": "TCP", 00:30:37.264 "adrfam": "IPv4", 00:30:37.264 "traddr": "10.0.0.2", 00:30:37.264 "trsvcid": "4421", 00:30:37.264 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:37.264 }, 00:30:37.264 "ctrlr_data": { 00:30:37.264 "cntlid": 3, 00:30:37.264 "vendor_id": "0x8086", 00:30:37.264 "model_number": "SPDK bdev Controller", 00:30:37.264 "serial_number": "00000000000000000000", 00:30:37.264 "firmware_revision": "25.01", 00:30:37.264 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:37.264 "oacs": { 00:30:37.264 "security": 0, 00:30:37.264 "format": 0, 00:30:37.264 "firmware": 0, 00:30:37.264 "ns_manage": 0 00:30:37.264 }, 00:30:37.264 "multi_ctrlr": true, 00:30:37.264 "ana_reporting": false 00:30:37.264 }, 00:30:37.264 "vs": { 00:30:37.264 "nvme_version": "1.3" 00:30:37.264 }, 00:30:37.264 "ns_data": { 00:30:37.264 "id": 1, 00:30:37.264 "can_share": true 00:30:37.264 } 
00:30:37.264 } 00:30:37.264 ], 00:30:37.264 "mp_policy": "active_passive" 00:30:37.264 } 00:30:37.264 } 00:30:37.264 ] 00:30:37.264 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.264 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:37.264 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.UzYlCTa6H5 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:37.265 rmmod nvme_tcp 00:30:37.265 rmmod nvme_fabrics 00:30:37.265 rmmod nvme_keyring 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:37.265 18:37:35 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3068399 ']' 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3068399 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3068399 ']' 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3068399 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3068399 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3068399' 00:30:37.265 killing process with pid 3068399 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3068399 00:30:37.265 18:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3068399 00:30:38.639 18:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:38.639 18:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:38.639 18:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:38.639 18:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:30:38.639 18:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:30:38.639 18:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:38.639 
18:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:30:38.639 18:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:38.639 18:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:38.639 18:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.639 18:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.639 18:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.541 18:37:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:40.541 00:30:40.541 real 0m7.307s 00:30:40.541 user 0m4.017s 00:30:40.541 sys 0m2.027s 00:30:40.541 18:37:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:40.541 18:37:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.541 ************************************ 00:30:40.541 END TEST nvmf_async_init 00:30:40.541 ************************************ 00:30:40.541 18:37:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:40.541 18:37:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:40.541 18:37:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:40.541 18:37:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.541 ************************************ 00:30:40.541 START TEST dma 00:30:40.541 ************************************ 00:30:40.541 18:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:30:40.541 * Looking for test storage... 00:30:40.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:40.541 18:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:40.541 18:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:30:40.541 18:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:40.541 18:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:40.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.542 --rc genhtml_branch_coverage=1 00:30:40.542 --rc genhtml_function_coverage=1 00:30:40.542 --rc genhtml_legend=1 00:30:40.542 --rc geninfo_all_blocks=1 00:30:40.542 --rc geninfo_unexecuted_blocks=1 00:30:40.542 00:30:40.542 ' 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:40.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.542 --rc genhtml_branch_coverage=1 00:30:40.542 --rc genhtml_function_coverage=1 
00:30:40.542 --rc genhtml_legend=1 00:30:40.542 --rc geninfo_all_blocks=1 00:30:40.542 --rc geninfo_unexecuted_blocks=1 00:30:40.542 00:30:40.542 ' 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:40.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.542 --rc genhtml_branch_coverage=1 00:30:40.542 --rc genhtml_function_coverage=1 00:30:40.542 --rc genhtml_legend=1 00:30:40.542 --rc geninfo_all_blocks=1 00:30:40.542 --rc geninfo_unexecuted_blocks=1 00:30:40.542 00:30:40.542 ' 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:40.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.542 --rc genhtml_branch_coverage=1 00:30:40.542 --rc genhtml_function_coverage=1 00:30:40.542 --rc genhtml_legend=1 00:30:40.542 --rc geninfo_all_blocks=1 00:30:40.542 --rc geninfo_unexecuted_blocks=1 00:30:40.542 00:30:40.542 ' 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:40.542 
18:37:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:40.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:40.542 00:30:40.542 real 0m0.158s 00:30:40.542 user 0m0.108s 00:30:40.542 sys 0m0.059s 00:30:40.542 18:37:38 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:40.542 18:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:40.542 ************************************ 00:30:40.542 END TEST dma 00:30:40.542 ************************************ 00:30:40.802 18:37:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:40.802 18:37:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:40.802 18:37:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:40.802 18:37:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.802 ************************************ 00:30:40.802 START TEST nvmf_identify 00:30:40.802 ************************************ 00:30:40.802 18:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:40.802 * Looking for test storage... 
00:30:40.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:40.802 18:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:40.802 18:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:30:40.802 18:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:40.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.802 --rc genhtml_branch_coverage=1 00:30:40.802 --rc genhtml_function_coverage=1 00:30:40.802 --rc genhtml_legend=1 00:30:40.802 --rc geninfo_all_blocks=1 00:30:40.802 --rc geninfo_unexecuted_blocks=1 00:30:40.802 00:30:40.802 ' 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:30:40.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.802 --rc genhtml_branch_coverage=1 00:30:40.802 --rc genhtml_function_coverage=1 00:30:40.802 --rc genhtml_legend=1 00:30:40.802 --rc geninfo_all_blocks=1 00:30:40.802 --rc geninfo_unexecuted_blocks=1 00:30:40.802 00:30:40.802 ' 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:40.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.802 --rc genhtml_branch_coverage=1 00:30:40.802 --rc genhtml_function_coverage=1 00:30:40.802 --rc genhtml_legend=1 00:30:40.802 --rc geninfo_all_blocks=1 00:30:40.802 --rc geninfo_unexecuted_blocks=1 00:30:40.802 00:30:40.802 ' 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:40.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.802 --rc genhtml_branch_coverage=1 00:30:40.802 --rc genhtml_function_coverage=1 00:30:40.802 --rc genhtml_legend=1 00:30:40.802 --rc geninfo_all_blocks=1 00:30:40.802 --rc geninfo_unexecuted_blocks=1 00:30:40.802 00:30:40.802 ' 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.802 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:40.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:40.803 18:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:43.340 18:37:41 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:43.340 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:43.341 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:43.341 
18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:43.341 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:43.341 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:43.341 18:37:41 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:43.341 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:43.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:43.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:30:43.341 00:30:43.341 --- 10.0.0.2 ping statistics --- 00:30:43.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.341 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:43.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:43.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:30:43.341 00:30:43.341 --- 10.0.0.1 ping statistics --- 00:30:43.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.341 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:43.341 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:43.342 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.342 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3070801 00:30:43.342 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:43.342 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:43.342 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3070801 00:30:43.342 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3070801 ']' 00:30:43.342 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.342 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:43.342 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
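The trace above shows `nvmftestinit` building an isolated test topology: one port of the NIC pair is moved into a network namespace (`cvl_0_0_ns_spdk`), each side gets a 10.0.0.x/24 address, an iptables rule opens TCP port 4420, and connectivity is verified with `ping` in both directions before `nvmf_tgt` is launched inside the namespace. A minimal sketch of that pattern, with a hypothetical helper name and the device/address values taken from the log (requires root; not the actual SPDK implementation):

```shell
#!/usr/bin/env bash
# Sketch of the target/initiator split performed by nvmftestinit above.
# Assumes root and two physical net devices; the function name is illustrative.

setup_nvmf_test_net() {
    local target_dev=$1 initiator_dev=$2 ns=cvl_0_0_ns_spdk

    ip netns add "$ns"
    ip link set "$target_dev" netns "$ns"          # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev "$initiator_dev"   # initiator stays in the root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_dev"
    ip link set "$initiator_dev" up
    ip netns exec "$ns" ip link set "$target_dev" up
    ip netns exec "$ns" ip link set lo up

    # Open the NVMe/TCP port and verify reachability in both directions,
    # mirroring the ping checks recorded in the log.
    iptables -I INPUT 1 -i "$initiator_dev" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

In the job itself the namespace wrapper is captured as `NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)` and prepended to the target invocation, which is why the target later listens on 10.0.0.2 while the initiator connects from 10.0.0.1.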
00:30:43.342 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:43.342 18:37:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.342 [2024-11-18 18:37:41.346709] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:30:43.342 [2024-11-18 18:37:41.346859] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:43.342 [2024-11-18 18:37:41.490468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:43.342 [2024-11-18 18:37:41.619565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:43.342 [2024-11-18 18:37:41.619658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:43.342 [2024-11-18 18:37:41.619685] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:43.342 [2024-11-18 18:37:41.619710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:43.342 [2024-11-18 18:37:41.619729] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
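After launching `nvmf_tgt`, the harness blocks in `waitforlisten` until the daemon's RPC socket appears ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."). A hedged sketch of that polling loop — the function name and exact logic are illustrative, though the retry count mirrors `max_retries=100` from `autotest_common.sh`:

```shell
# Sketch of the waitforlisten pattern from the log: poll until the target's
# RPC UNIX socket exists, bailing out early if the process has died.

wait_for_rpc_socket() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
    while (( retries-- > 0 )); do
        # Give up immediately if the target exited instead of starting up.
        kill -0 "$pid" 2>/dev/null || return 1
        [ -S "$sock" ] && return 0    # -S: path exists and is a socket
        sleep 0.1
    done
    return 1    # timed out
}
```

The real helper additionally confirms the RPC server answers (not just that the socket file exists) before the script proceeds to issue `rpc_cmd` calls.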
00:30:43.342 [2024-11-18 18:37:41.622558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.342 [2024-11-18 18:37:41.622644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:43.342 [2024-11-18 18:37:41.622687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.342 [2024-11-18 18:37:41.622710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:44.276 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:44.276 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:30:44.276 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:44.276 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.276 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:44.276 [2024-11-18 18:37:42.342797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.276 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.276 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:44.276 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:44.276 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:44.276 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:44.276 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.276 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:44.276 Malloc0 00:30:44.276 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.276 18:37:42 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:44.276 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.277 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:44.277 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.277 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:44.277 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.277 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:44.277 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.277 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:44.277 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.277 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:44.277 [2024-11-18 18:37:42.484093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.277 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.277 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:44.277 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.277 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:44.277 18:37:42 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.277 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:44.277 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.277 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:44.277 [ 00:30:44.277 { 00:30:44.277 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:44.277 "subtype": "Discovery", 00:30:44.277 "listen_addresses": [ 00:30:44.277 { 00:30:44.277 "trtype": "TCP", 00:30:44.277 "adrfam": "IPv4", 00:30:44.277 "traddr": "10.0.0.2", 00:30:44.277 "trsvcid": "4420" 00:30:44.277 } 00:30:44.277 ], 00:30:44.277 "allow_any_host": true, 00:30:44.277 "hosts": [] 00:30:44.277 }, 00:30:44.277 { 00:30:44.277 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:44.277 "subtype": "NVMe", 00:30:44.277 "listen_addresses": [ 00:30:44.277 { 00:30:44.277 "trtype": "TCP", 00:30:44.277 "adrfam": "IPv4", 00:30:44.277 "traddr": "10.0.0.2", 00:30:44.277 "trsvcid": "4420" 00:30:44.277 } 00:30:44.277 ], 00:30:44.277 "allow_any_host": true, 00:30:44.277 "hosts": [], 00:30:44.277 "serial_number": "SPDK00000000000001", 00:30:44.277 "model_number": "SPDK bdev Controller", 00:30:44.277 "max_namespaces": 32, 00:30:44.277 "min_cntlid": 1, 00:30:44.277 "max_cntlid": 65519, 00:30:44.277 "namespaces": [ 00:30:44.277 { 00:30:44.277 "nsid": 1, 00:30:44.277 "bdev_name": "Malloc0", 00:30:44.277 "name": "Malloc0", 00:30:44.277 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:44.277 "eui64": "ABCDEF0123456789", 00:30:44.277 "uuid": "5266cd23-595d-49c5-bad9-ff33e1faa5d9" 00:30:44.277 } 00:30:44.277 ] 00:30:44.277 } 00:30:44.277 ] 00:30:44.277 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.277 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:44.277 [2024-11-18 18:37:42.555025] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:30:44.277 [2024-11-18 18:37:42.555145] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070955 ] 00:30:44.537 [2024-11-18 18:37:42.642463] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:30:44.537 [2024-11-18 18:37:42.642583] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:44.537 [2024-11-18 18:37:42.642631] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:44.537 [2024-11-18 18:37:42.642668] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:44.537 [2024-11-18 18:37:42.642695] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:44.538 [2024-11-18 18:37:42.643476] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:30:44.538 [2024-11-18 18:37:42.643559] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:44.538 [2024-11-18 18:37:42.657625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:44.538 [2024-11-18 18:37:42.657678] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:44.538 [2024-11-18 18:37:42.657697] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:44.538 [2024-11-18 18:37:42.657709] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:44.538 [2024-11-18 18:37:42.657783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.538 [2024-11-18 18:37:42.657806] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.538 [2024-11-18 18:37:42.657820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.538 [2024-11-18 18:37:42.657860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:44.538 [2024-11-18 18:37:42.657902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.538 [2024-11-18 18:37:42.665637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.538 [2024-11-18 18:37:42.665663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.538 [2024-11-18 18:37:42.665676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.538 [2024-11-18 18:37:42.665690] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.538 [2024-11-18 18:37:42.665743] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:44.538 [2024-11-18 18:37:42.665769] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:30:44.538 [2024-11-18 18:37:42.665786] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:30:44.538 [2024-11-18 18:37:42.665814] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.538 [2024-11-18 18:37:42.665829] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.538 [2024-11-18 18:37:42.665855] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x615000015700) 00:30:44.538 [2024-11-18 18:37:42.665883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.538 [2024-11-18 18:37:42.665919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.538 [2024-11-18 18:37:42.666104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.538 [2024-11-18 18:37:42.666128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.538 [2024-11-18 18:37:42.666142] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.538 [2024-11-18 18:37:42.666154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.538 [2024-11-18 18:37:42.666185] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:30:44.538 [2024-11-18 18:37:42.666212] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:30:44.538 [2024-11-18 18:37:42.666240] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.538 [2024-11-18 18:37:42.666255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.538 [2024-11-18 18:37:42.666267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.538 [2024-11-18 18:37:42.666292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.538 [2024-11-18 18:37:42.666327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.538 [2024-11-18 18:37:42.666466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.538 [2024-11-18 18:37:42.666489] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.538 [2024-11-18 18:37:42.666501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.538 [2024-11-18 18:37:42.666523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.538 [2024-11-18 18:37:42.666542] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:30:44.538 [2024-11-18 18:37:42.666567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:44.538 [2024-11-18 18:37:42.666588] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.538 [2024-11-18 18:37:42.666602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.538 [2024-11-18 18:37:42.666624] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.538 [2024-11-18 18:37:42.666652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.538 [2024-11-18 18:37:42.666686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.538 [2024-11-18 18:37:42.666832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.538 [2024-11-18 18:37:42.666854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.538 [2024-11-18 18:37:42.666866] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.538 [2024-11-18 18:37:42.666877] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.538 [2024-11-18 18:37:42.666894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for 
CSTS.RDY = 0 (timeout 15000 ms) 00:30:44.538 [2024-11-18 18:37:42.666922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.538 [2024-11-18 18:37:42.666944] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.538 [2024-11-18 18:37:42.666958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.538 [2024-11-18 18:37:42.666978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.538 [2024-11-18 18:37:42.667010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.538 [2024-11-18 18:37:42.667153] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.538 [2024-11-18 18:37:42.667174] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.538 [2024-11-18 18:37:42.667187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.538 [2024-11-18 18:37:42.667198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.538 [2024-11-18 18:37:42.667214] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:44.538 [2024-11-18 18:37:42.667235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:44.538 [2024-11-18 18:37:42.667264] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:44.538 [2024-11-18 18:37:42.667382] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:30:44.538 [2024-11-18 18:37:42.667397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:44.538 [2024-11-18 18:37:42.667437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.538 [2024-11-18 18:37:42.667452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.538 [2024-11-18 18:37:42.667468] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.538 [2024-11-18 18:37:42.667489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.538 [2024-11-18 18:37:42.667521] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.538 [2024-11-18 18:37:42.667692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.538 [2024-11-18 18:37:42.667714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.538 [2024-11-18 18:37:42.667727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.538 [2024-11-18 18:37:42.667738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.539 [2024-11-18 18:37:42.667754] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:44.539 [2024-11-18 18:37:42.667787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.667804] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.667817] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.539 [2024-11-18 18:37:42.667836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.539 [2024-11-18 
18:37:42.667868] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.539 [2024-11-18 18:37:42.668000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.539 [2024-11-18 18:37:42.668021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.539 [2024-11-18 18:37:42.668034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.668045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.539 [2024-11-18 18:37:42.668060] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:44.539 [2024-11-18 18:37:42.668080] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:44.539 [2024-11-18 18:37:42.668105] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:30:44.539 [2024-11-18 18:37:42.668135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:44.539 [2024-11-18 18:37:42.668169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.668186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.539 [2024-11-18 18:37:42.668213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.539 [2024-11-18 18:37:42.668250] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.539 [2024-11-18 18:37:42.668428] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.539 [2024-11-18 18:37:42.668450] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.539 [2024-11-18 18:37:42.668462] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.668480] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:44.539 [2024-11-18 18:37:42.668496] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:44.539 [2024-11-18 18:37:42.668510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.668540] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.668557] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.668583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.539 [2024-11-18 18:37:42.668601] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.539 [2024-11-18 18:37:42.668623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.668636] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.539 [2024-11-18 18:37:42.668661] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:30:44.539 [2024-11-18 18:37:42.668679] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:30:44.539 [2024-11-18 18:37:42.668697] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:30:44.539 [2024-11-18 18:37:42.668715] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:30:44.539 [2024-11-18 18:37:42.668730] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:30:44.539 [2024-11-18 18:37:42.668749] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:30:44.539 [2024-11-18 18:37:42.668792] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:44.539 [2024-11-18 18:37:42.668814] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.668843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.668856] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.539 [2024-11-18 18:37:42.668881] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:44.539 [2024-11-18 18:37:42.668916] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.539 [2024-11-18 18:37:42.669064] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.539 [2024-11-18 18:37:42.669086] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.539 [2024-11-18 18:37:42.669098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.669109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.539 [2024-11-18 18:37:42.669135] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.669154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.539 [2024-11-18 
18:37:42.669166] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.539 [2024-11-18 18:37:42.669185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.539 [2024-11-18 18:37:42.669203] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.669222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.669234] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:44.539 [2024-11-18 18:37:42.669251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.539 [2024-11-18 18:37:42.669268] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.669280] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.669290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:44.539 [2024-11-18 18:37:42.669307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.539 [2024-11-18 18:37:42.669344] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.669359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.669369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.539 [2024-11-18 18:37:42.669386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.539 [2024-11-18 18:37:42.669414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:44.539 [2024-11-18 18:37:42.669441] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:44.539 [2024-11-18 18:37:42.669478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.669492] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.539 [2024-11-18 18:37:42.669511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.539 [2024-11-18 18:37:42.669544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.539 [2024-11-18 18:37:42.669578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:44.539 [2024-11-18 18:37:42.669591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:44.539 [2024-11-18 18:37:42.669603] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.539 [2024-11-18 18:37:42.673641] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.539 [2024-11-18 18:37:42.673667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.539 [2024-11-18 18:37:42.673685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.539 [2024-11-18 18:37:42.673697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.673708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.539 [2024-11-18 18:37:42.673730] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:30:44.539 [2024-11-18 18:37:42.673750] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:30:44.539 [2024-11-18 18:37:42.673798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.539 [2024-11-18 18:37:42.673816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.540 [2024-11-18 18:37:42.673837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.540 [2024-11-18 18:37:42.673870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.540 [2024-11-18 18:37:42.674034] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.540 [2024-11-18 18:37:42.674063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.540 [2024-11-18 18:37:42.674078] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.674090] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:44.540 [2024-11-18 18:37:42.674108] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:44.540 [2024-11-18 18:37:42.674121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.674154] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.674171] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.714751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
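The `_nvme_ctrlr_set_state` debug lines above walk the discovery controller through its full initialization sequence, ending at "setting state to ready". A condensed sketch of that ordering follows; the state names are paraphrased from the log text, not taken from SPDK's internal state enum, so treat them as labels for this trace only.

```python
# Condensed controller-initialization sequence reconstructed from the
# _nvme_ctrlr_set_state debug lines in this trace. State names are
# paraphrased from the log text, not SPDK's internal enum values.
INIT_SEQUENCE = [
    "connect adminq",
    "read vs",
    "read cap",
    "check en",
    "disable and wait for CSTS.RDY = 0",
    "controller is disabled",
    "enable controller by writing CC.EN = 1",
    "wait for CSTS.RDY = 1",
    "reset admin queue",
    "identify controller",
    "configure AER",
    "set keep alive timeout",
    "ready",
]

def must_precede(a: str, b: str) -> bool:
    """True if state `a` is reached before state `b` in the traced sequence."""
    return INIT_SEQUENCE.index(a) < INIT_SEQUENCE.index(b)

# The controller is only re-enabled after it has been observed disabled,
# exactly as the CC.EN / CSTS.RDY handshake in the trace shows.
print(must_precede("disable and wait for CSTS.RDY = 0",
                   "enable controller by writing CC.EN = 1"))
```

The disable-before-enable handshake is why the trace writes CC.EN = 0, polls for CSTS.RDY = 0, and only then sets CC.EN = 1 and polls for CSTS.RDY = 1.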
00:30:44.540 [2024-11-18 18:37:42.714782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.540 [2024-11-18 18:37:42.714796] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.714810] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.540 [2024-11-18 18:37:42.714850] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:30:44.540 [2024-11-18 18:37:42.714923] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.714950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.540 [2024-11-18 18:37:42.714974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.540 [2024-11-18 18:37:42.715002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.715016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.715028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:44.540 [2024-11-18 18:37:42.715062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.540 [2024-11-18 18:37:42.715097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.540 [2024-11-18 18:37:42.715135] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:44.540 [2024-11-18 18:37:42.715402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.540 [2024-11-18 18:37:42.715425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=7 00:30:44.540 [2024-11-18 18:37:42.715437] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.715449] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=1024, cccid=4 00:30:44.540 [2024-11-18 18:37:42.715463] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=1024 00:30:44.540 [2024-11-18 18:37:42.715476] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.715505] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.715521] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.715537] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.540 [2024-11-18 18:37:42.715553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.540 [2024-11-18 18:37:42.715565] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.715577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:44.540 [2024-11-18 18:37:42.755732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.540 [2024-11-18 18:37:42.755763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.540 [2024-11-18 18:37:42.755776] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.755793] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.540 [2024-11-18 18:37:42.755844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.755864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.540 [2024-11-18 
18:37:42.755887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.540 [2024-11-18 18:37:42.755932] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.540 [2024-11-18 18:37:42.756103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.540 [2024-11-18 18:37:42.756125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.540 [2024-11-18 18:37:42.756138] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.756149] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=3072, cccid=4 00:30:44.540 [2024-11-18 18:37:42.756161] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=3072 00:30:44.540 [2024-11-18 18:37:42.756173] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.756203] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.756219] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.756237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.540 [2024-11-18 18:37:42.756255] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.540 [2024-11-18 18:37:42.756279] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.756291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.540 [2024-11-18 18:37:42.756320] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.756337] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x615000015700) 00:30:44.540 [2024-11-18 18:37:42.756365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.540 [2024-11-18 18:37:42.756408] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.540 [2024-11-18 18:37:42.756574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.540 [2024-11-18 18:37:42.756596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.540 [2024-11-18 18:37:42.760621] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.760641] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8, cccid=4 00:30:44.540 [2024-11-18 18:37:42.760654] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=8 00:30:44.540 [2024-11-18 18:37:42.760666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.760692] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.760707] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.798632] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.540 [2024-11-18 18:37:42.798677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.540 [2024-11-18 18:37:42.798690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.540 [2024-11-18 18:37:42.798703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.540 ===================================================== 00:30:44.540 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:44.540 
===================================================== 00:30:44.540 Controller Capabilities/Features 00:30:44.540 ================================ 00:30:44.540 Vendor ID: 0000 00:30:44.540 Subsystem Vendor ID: 0000 00:30:44.540 Serial Number: .................... 00:30:44.540 Model Number: ........................................ 00:30:44.540 Firmware Version: 25.01 00:30:44.540 Recommended Arb Burst: 0 00:30:44.540 IEEE OUI Identifier: 00 00 00 00:30:44.540 Multi-path I/O 00:30:44.540 May have multiple subsystem ports: No 00:30:44.540 May have multiple controllers: No 00:30:44.540 Associated with SR-IOV VF: No 00:30:44.540 Max Data Transfer Size: 131072 00:30:44.540 Max Number of Namespaces: 0 00:30:44.540 Max Number of I/O Queues: 1024 00:30:44.540 NVMe Specification Version (VS): 1.3 00:30:44.540 NVMe Specification Version (Identify): 1.3 00:30:44.540 Maximum Queue Entries: 128 00:30:44.540 Contiguous Queues Required: Yes 00:30:44.540 Arbitration Mechanisms Supported 00:30:44.540 Weighted Round Robin: Not Supported 00:30:44.540 Vendor Specific: Not Supported 00:30:44.540 Reset Timeout: 15000 ms 00:30:44.540 Doorbell Stride: 4 bytes 00:30:44.540 NVM Subsystem Reset: Not Supported 00:30:44.540 Command Sets Supported 00:30:44.540 NVM Command Set: Supported 00:30:44.541 Boot Partition: Not Supported 00:30:44.541 Memory Page Size Minimum: 4096 bytes 00:30:44.541 Memory Page Size Maximum: 4096 bytes 00:30:44.541 Persistent Memory Region: Not Supported 00:30:44.541 Optional Asynchronous Events Supported 00:30:44.541 Namespace Attribute Notices: Not Supported 00:30:44.541 Firmware Activation Notices: Not Supported 00:30:44.541 ANA Change Notices: Not Supported 00:30:44.541 PLE Aggregate Log Change Notices: Not Supported 00:30:44.541 LBA Status Info Alert Notices: Not Supported 00:30:44.541 EGE Aggregate Log Change Notices: Not Supported 00:30:44.541 Normal NVM Subsystem Shutdown event: Not Supported 00:30:44.541 Zone Descriptor Change Notices: Not Supported 00:30:44.541 
Discovery Log Change Notices: Supported 00:30:44.541 Controller Attributes 00:30:44.541 128-bit Host Identifier: Not Supported 00:30:44.541 Non-Operational Permissive Mode: Not Supported 00:30:44.541 NVM Sets: Not Supported 00:30:44.541 Read Recovery Levels: Not Supported 00:30:44.541 Endurance Groups: Not Supported 00:30:44.541 Predictable Latency Mode: Not Supported 00:30:44.541 Traffic Based Keep ALive: Not Supported 00:30:44.541 Namespace Granularity: Not Supported 00:30:44.541 SQ Associations: Not Supported 00:30:44.541 UUID List: Not Supported 00:30:44.541 Multi-Domain Subsystem: Not Supported 00:30:44.541 Fixed Capacity Management: Not Supported 00:30:44.541 Variable Capacity Management: Not Supported 00:30:44.541 Delete Endurance Group: Not Supported 00:30:44.541 Delete NVM Set: Not Supported 00:30:44.541 Extended LBA Formats Supported: Not Supported 00:30:44.541 Flexible Data Placement Supported: Not Supported 00:30:44.541 00:30:44.541 Controller Memory Buffer Support 00:30:44.541 ================================ 00:30:44.541 Supported: No 00:30:44.541 00:30:44.541 Persistent Memory Region Support 00:30:44.541 ================================ 00:30:44.541 Supported: No 00:30:44.541 00:30:44.541 Admin Command Set Attributes 00:30:44.541 ============================ 00:30:44.541 Security Send/Receive: Not Supported 00:30:44.541 Format NVM: Not Supported 00:30:44.541 Firmware Activate/Download: Not Supported 00:30:44.541 Namespace Management: Not Supported 00:30:44.541 Device Self-Test: Not Supported 00:30:44.541 Directives: Not Supported 00:30:44.541 NVMe-MI: Not Supported 00:30:44.541 Virtualization Management: Not Supported 00:30:44.541 Doorbell Buffer Config: Not Supported 00:30:44.541 Get LBA Status Capability: Not Supported 00:30:44.541 Command & Feature Lockdown Capability: Not Supported 00:30:44.541 Abort Command Limit: 1 00:30:44.541 Async Event Request Limit: 4 00:30:44.541 Number of Firmware Slots: N/A 00:30:44.541 Firmware Slot 1 Read-Only: N/A 
00:30:44.541 Firmware Activation Without Reset: N/A 00:30:44.541 Multiple Update Detection Support: N/A 00:30:44.541 Firmware Update Granularity: No Information Provided 00:30:44.541 Per-Namespace SMART Log: No 00:30:44.541 Asymmetric Namespace Access Log Page: Not Supported 00:30:44.541 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:44.541 Command Effects Log Page: Not Supported 00:30:44.541 Get Log Page Extended Data: Supported 00:30:44.541 Telemetry Log Pages: Not Supported 00:30:44.541 Persistent Event Log Pages: Not Supported 00:30:44.541 Supported Log Pages Log Page: May Support 00:30:44.541 Commands Supported & Effects Log Page: Not Supported 00:30:44.541 Feature Identifiers & Effects Log Page:May Support 00:30:44.541 NVMe-MI Commands & Effects Log Page: May Support 00:30:44.541 Data Area 4 for Telemetry Log: Not Supported 00:30:44.541 Error Log Page Entries Supported: 128 00:30:44.541 Keep Alive: Not Supported 00:30:44.541 00:30:44.541 NVM Command Set Attributes 00:30:44.541 ========================== 00:30:44.541 Submission Queue Entry Size 00:30:44.541 Max: 1 00:30:44.541 Min: 1 00:30:44.541 Completion Queue Entry Size 00:30:44.541 Max: 1 00:30:44.541 Min: 1 00:30:44.541 Number of Namespaces: 0 00:30:44.541 Compare Command: Not Supported 00:30:44.541 Write Uncorrectable Command: Not Supported 00:30:44.541 Dataset Management Command: Not Supported 00:30:44.541 Write Zeroes Command: Not Supported 00:30:44.541 Set Features Save Field: Not Supported 00:30:44.541 Reservations: Not Supported 00:30:44.541 Timestamp: Not Supported 00:30:44.541 Copy: Not Supported 00:30:44.541 Volatile Write Cache: Not Present 00:30:44.541 Atomic Write Unit (Normal): 1 00:30:44.541 Atomic Write Unit (PFail): 1 00:30:44.541 Atomic Compare & Write Unit: 1 00:30:44.541 Fused Compare & Write: Supported 00:30:44.541 Scatter-Gather List 00:30:44.541 SGL Command Set: Supported 00:30:44.541 SGL Keyed: Supported 00:30:44.541 SGL Bit Bucket Descriptor: Not Supported 00:30:44.541 
SGL Metadata Pointer: Not Supported 00:30:44.541 Oversized SGL: Not Supported 00:30:44.541 SGL Metadata Address: Not Supported 00:30:44.541 SGL Offset: Supported 00:30:44.541 Transport SGL Data Block: Not Supported 00:30:44.541 Replay Protected Memory Block: Not Supported 00:30:44.541 00:30:44.541 Firmware Slot Information 00:30:44.541 ========================= 00:30:44.541 Active slot: 0 00:30:44.541 00:30:44.541 00:30:44.541 Error Log 00:30:44.541 ========= 00:30:44.541 00:30:44.541 Active Namespaces 00:30:44.541 ================= 00:30:44.541 Discovery Log Page 00:30:44.541 ================== 00:30:44.541 Generation Counter: 2 00:30:44.541 Number of Records: 2 00:30:44.541 Record Format: 0 00:30:44.541 00:30:44.541 Discovery Log Entry 0 00:30:44.541 ---------------------- 00:30:44.541 Transport Type: 3 (TCP) 00:30:44.541 Address Family: 1 (IPv4) 00:30:44.541 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:44.541 Entry Flags: 00:30:44.541 Duplicate Returned Information: 1 00:30:44.541 Explicit Persistent Connection Support for Discovery: 1 00:30:44.541 Transport Requirements: 00:30:44.541 Secure Channel: Not Required 00:30:44.541 Port ID: 0 (0x0000) 00:30:44.541 Controller ID: 65535 (0xffff) 00:30:44.541 Admin Max SQ Size: 128 00:30:44.541 Transport Service Identifier: 4420 00:30:44.541 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:44.541 Transport Address: 10.0.0.2 00:30:44.541 Discovery Log Entry 1 00:30:44.541 ---------------------- 00:30:44.541 Transport Type: 3 (TCP) 00:30:44.541 Address Family: 1 (IPv4) 00:30:44.541 Subsystem Type: 2 (NVM Subsystem) 00:30:44.541 Entry Flags: 00:30:44.541 Duplicate Returned Information: 0 00:30:44.541 Explicit Persistent Connection Support for Discovery: 0 00:30:44.541 Transport Requirements: 00:30:44.541 Secure Channel: Not Required 00:30:44.541 Port ID: 0 (0x0000) 00:30:44.541 Controller ID: 65535 (0xffff) 00:30:44.541 Admin Max SQ Size: 128 00:30:44.542 Transport Service Identifier: 4420 
00:30:44.542 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:44.542 Transport Address: 10.0.0.2 [2024-11-18 18:37:42.798899] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:30:44.542 [2024-11-18 18:37:42.798932] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.542 [2024-11-18 18:37:42.798975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.542 [2024-11-18 18:37:42.798992] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:44.542 [2024-11-18 18:37:42.799007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.542 [2024-11-18 18:37:42.799019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:44.542 [2024-11-18 18:37:42.799033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.542 [2024-11-18 18:37:42.799046] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.542 [2024-11-18 18:37:42.799060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.542 [2024-11-18 18:37:42.799082] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.799097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.799109] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.542 [2024-11-18 18:37:42.799129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.542 [2024-11-18 18:37:42.799181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.542 [2024-11-18 18:37:42.799310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.542 [2024-11-18 18:37:42.799333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.542 [2024-11-18 18:37:42.799346] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.799358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.542 [2024-11-18 18:37:42.799380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.799396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.799408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.542 [2024-11-18 18:37:42.799428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.542 [2024-11-18 18:37:42.799469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.542 [2024-11-18 18:37:42.799654] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.542 [2024-11-18 18:37:42.799676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.542 [2024-11-18 18:37:42.799688] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.799700] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.542 [2024-11-18 18:37:42.799720] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:30:44.542 [2024-11-18 
18:37:42.799740] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:30:44.542 [2024-11-18 18:37:42.799767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.799784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.799796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.542 [2024-11-18 18:37:42.799815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.542 [2024-11-18 18:37:42.799848] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.542 [2024-11-18 18:37:42.800008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.542 [2024-11-18 18:37:42.800035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.542 [2024-11-18 18:37:42.800048] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.800060] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.542 [2024-11-18 18:37:42.800088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.800105] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.800116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.542 [2024-11-18 18:37:42.800135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.542 [2024-11-18 18:37:42.800166] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.542 [2024-11-18 18:37:42.800324] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.542 [2024-11-18 18:37:42.800345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.542 [2024-11-18 18:37:42.800358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.800369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.542 [2024-11-18 18:37:42.800396] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.800412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.800424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.542 [2024-11-18 18:37:42.800442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.542 [2024-11-18 18:37:42.800473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.542 [2024-11-18 18:37:42.800618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.542 [2024-11-18 18:37:42.800641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.542 [2024-11-18 18:37:42.800653] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.800664] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.542 [2024-11-18 18:37:42.800692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.800708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.800720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.542 [2024-11-18 18:37:42.800738] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.542 [2024-11-18 18:37:42.800770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.542 [2024-11-18 18:37:42.800919] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.542 [2024-11-18 18:37:42.800949] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.542 [2024-11-18 18:37:42.800963] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.800975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.542 [2024-11-18 18:37:42.801003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.801019] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.801030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.542 [2024-11-18 18:37:42.801049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.542 [2024-11-18 18:37:42.801080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.542 [2024-11-18 18:37:42.801217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.542 [2024-11-18 18:37:42.801244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.542 [2024-11-18 18:37:42.801257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.801269] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.542 [2024-11-18 18:37:42.801296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.542 [2024-11-18 
18:37:42.801312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.542 [2024-11-18 18:37:42.801324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.542 [2024-11-18 18:37:42.801347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.543 [2024-11-18 18:37:42.801380] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.543 [2024-11-18 18:37:42.801485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.543 [2024-11-18 18:37:42.801505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.543 [2024-11-18 18:37:42.801517] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.543 [2024-11-18 18:37:42.801528] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.543 [2024-11-18 18:37:42.801555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.543 [2024-11-18 18:37:42.801572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.543 [2024-11-18 18:37:42.801583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.543 [2024-11-18 18:37:42.801601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.543 [2024-11-18 18:37:42.801643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.543 [2024-11-18 18:37:42.801753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.543 [2024-11-18 18:37:42.801774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.543 [2024-11-18 18:37:42.801785] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:30:44.543 [2024-11-18 18:37:42.801796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.543 [2024-11-18 18:37:42.801823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.543 [2024-11-18 18:37:42.801839] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.543 [2024-11-18 18:37:42.801850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.543 [2024-11-18 18:37:42.801869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.543 [2024-11-18 18:37:42.801900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.543 [2024-11-18 18:37:42.802001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.543 [2024-11-18 18:37:42.802022] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.543 [2024-11-18 18:37:42.802034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.543 [2024-11-18 18:37:42.802045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.543 [2024-11-18 18:37:42.802072] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.543 [2024-11-18 18:37:42.802089] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.543 [2024-11-18 18:37:42.802100] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.543 [2024-11-18 18:37:42.802118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.543 [2024-11-18 18:37:42.802149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.543 [2024-11-18 18:37:42.802258] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.543 [2024-11-18 18:37:42.802279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.543 [2024-11-18 18:37:42.802295] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.543 [2024-11-18 18:37:42.802319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.543 [2024-11-18 18:37:42.802349] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.543 [2024-11-18 18:37:42.802365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.543 [2024-11-18 18:37:42.802376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.543 [2024-11-18 18:37:42.802395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.543 [2024-11-18 18:37:42.802426] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.543 [2024-11-18 18:37:42.802560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.543 [2024-11-18 18:37:42.802582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.543 [2024-11-18 18:37:42.802594] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.543 [2024-11-18 18:37:42.802605] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.543 [2024-11-18 18:37:42.806671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.543 [2024-11-18 18:37:42.806689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.543 [2024-11-18 18:37:42.806700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.543 [2024-11-18 18:37:42.806719] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.543 [2024-11-18 18:37:42.806751] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.543 [2024-11-18 18:37:42.806890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.543 [2024-11-18 18:37:42.806913] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.543 [2024-11-18 18:37:42.806925] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.543 [2024-11-18 18:37:42.806936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.543 [2024-11-18 18:37:42.806959] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:30:44.543 00:30:44.543 18:37:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:44.804 [2024-11-18 18:37:42.909062] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:30:44.804 [2024-11-18 18:37:42.909158] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070968 ] 00:30:44.804 [2024-11-18 18:37:42.987343] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:30:44.804 [2024-11-18 18:37:42.987471] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:44.804 [2024-11-18 18:37:42.987501] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:44.805 [2024-11-18 18:37:42.987537] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:44.805 [2024-11-18 18:37:42.987562] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:44.805 [2024-11-18 18:37:42.991644] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:30:44.805 [2024-11-18 18:37:42.991731] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:44.805 [2024-11-18 18:37:42.998624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:44.805 [2024-11-18 18:37:42.998678] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:44.805 [2024-11-18 18:37:42.998698] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:44.805 [2024-11-18 18:37:42.998710] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:44.805 [2024-11-18 18:37:42.998787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.805 [2024-11-18 18:37:42.998809] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.805 [2024-11-18 18:37:42.998830] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.805 [2024-11-18 18:37:42.998865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:44.805 [2024-11-18 18:37:42.998932] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.805 [2024-11-18 18:37:43.005628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.805 [2024-11-18 18:37:43.005658] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.805 [2024-11-18 18:37:43.005672] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.805 [2024-11-18 18:37:43.005687] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.805 [2024-11-18 18:37:43.005715] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:44.805 [2024-11-18 18:37:43.005739] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:30:44.805 [2024-11-18 18:37:43.005757] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:30:44.805 [2024-11-18 18:37:43.005797] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.805 [2024-11-18 18:37:43.005813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.805 [2024-11-18 18:37:43.005829] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.805 [2024-11-18 18:37:43.005853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.805 [2024-11-18 18:37:43.005899] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.805 
[2024-11-18 18:37:43.006042] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.805 [2024-11-18 18:37:43.006071] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.805 [2024-11-18 18:37:43.006087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.805 [2024-11-18 18:37:43.006100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.805 [2024-11-18 18:37:43.006132] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:30:44.805 [2024-11-18 18:37:43.006157] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:30:44.805 [2024-11-18 18:37:43.006179] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.805 [2024-11-18 18:37:43.006194] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.805 [2024-11-18 18:37:43.006206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.805 [2024-11-18 18:37:43.006232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.805 [2024-11-18 18:37:43.006268] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.805 [2024-11-18 18:37:43.006378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.805 [2024-11-18 18:37:43.006400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.805 [2024-11-18 18:37:43.006417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.805 [2024-11-18 18:37:43.006430] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.805 [2024-11-18 18:37:43.006447] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:30:44.805 [2024-11-18 18:37:43.006473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:44.805 [2024-11-18 18:37:43.006495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.805 [2024-11-18 18:37:43.006517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.805 [2024-11-18 18:37:43.006534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.805 [2024-11-18 18:37:43.006556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.805 [2024-11-18 18:37:43.006590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.805 [2024-11-18 18:37:43.006717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.805 [2024-11-18 18:37:43.006739] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.805 [2024-11-18 18:37:43.006756] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.805 [2024-11-18 18:37:43.006770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.805 [2024-11-18 18:37:43.006787] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:44.805 [2024-11-18 18:37:43.006815] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.805 [2024-11-18 18:37:43.006833] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.805 [2024-11-18 18:37:43.006845] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.805 
[2024-11-18 18:37:43.006865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.805 [2024-11-18 18:37:43.006908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.805 [2024-11-18 18:37:43.007017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.805 [2024-11-18 18:37:43.007040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.805 [2024-11-18 18:37:43.007053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.805 [2024-11-18 18:37:43.007064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.805 [2024-11-18 18:37:43.007080] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:44.805 [2024-11-18 18:37:43.007102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:44.805 [2024-11-18 18:37:43.007125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:44.805 [2024-11-18 18:37:43.007244] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:30:44.805 [2024-11-18 18:37:43.007259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:44.805 [2024-11-18 18:37:43.007283] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.805 [2024-11-18 18:37:43.007313] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.805 [2024-11-18 18:37:43.007330] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x615000015700) 00:30:44.805 [2024-11-18 18:37:43.007352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.805 [2024-11-18 18:37:43.007406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.805 [2024-11-18 18:37:43.007537] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.805 [2024-11-18 18:37:43.007563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.805 [2024-11-18 18:37:43.007578] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.805 [2024-11-18 18:37:43.007604] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.805 [2024-11-18 18:37:43.007638] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:44.805 [2024-11-18 18:37:43.007674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.805 [2024-11-18 18:37:43.007691] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.805 [2024-11-18 18:37:43.007703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.805 [2024-11-18 18:37:43.007723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.806 [2024-11-18 18:37:43.007757] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.806 [2024-11-18 18:37:43.007886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.806 [2024-11-18 18:37:43.007909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.806 [2024-11-18 18:37:43.007922] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.806 [2024-11-18 
18:37:43.007933] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.806 [2024-11-18 18:37:43.007948] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:44.806 [2024-11-18 18:37:43.007964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:44.806 [2024-11-18 18:37:43.007988] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:30:44.806 [2024-11-18 18:37:43.008010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:44.806 [2024-11-18 18:37:43.008043] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.008069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.806 [2024-11-18 18:37:43.008092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.806 [2024-11-18 18:37:43.008126] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.806 [2024-11-18 18:37:43.008311] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.806 [2024-11-18 18:37:43.008339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.806 [2024-11-18 18:37:43.008353] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.008366] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:44.806 [2024-11-18 18:37:43.008381] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:44.806 [2024-11-18 18:37:43.008395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.008426] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.008443] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.052637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.806 [2024-11-18 18:37:43.052669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.806 [2024-11-18 18:37:43.052683] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.052700] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.806 [2024-11-18 18:37:43.052733] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:30:44.806 [2024-11-18 18:37:43.052753] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:30:44.806 [2024-11-18 18:37:43.052772] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:30:44.806 [2024-11-18 18:37:43.052790] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:30:44.806 [2024-11-18 18:37:43.052805] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:30:44.806 [2024-11-18 18:37:43.052818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:30:44.806 [2024-11-18 18:37:43.052850] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to 
wait for configure aer (timeout 30000 ms) 00:30:44.806 [2024-11-18 18:37:43.052873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.052888] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.052917] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.806 [2024-11-18 18:37:43.052945] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:44.806 [2024-11-18 18:37:43.052982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.806 [2024-11-18 18:37:43.053116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.806 [2024-11-18 18:37:43.053140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.806 [2024-11-18 18:37:43.053152] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.053164] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.806 [2024-11-18 18:37:43.053186] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.053207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.053220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.806 [2024-11-18 18:37:43.053244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.806 [2024-11-18 18:37:43.053273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.053286] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.053297] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:44.806 [2024-11-18 18:37:43.053315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.806 [2024-11-18 18:37:43.053332] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.053344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.053354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:44.806 [2024-11-18 18:37:43.053371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.806 [2024-11-18 18:37:43.053403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.053414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.053425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.806 [2024-11-18 18:37:43.053471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.806 [2024-11-18 18:37:43.053492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:44.806 [2024-11-18 18:37:43.053537] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:44.806 [2024-11-18 18:37:43.053559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.053573] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.806 [2024-11-18 18:37:43.053602] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.806 [2024-11-18 18:37:43.053648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.806 [2024-11-18 18:37:43.053690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:44.806 [2024-11-18 18:37:43.053705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:44.806 [2024-11-18 18:37:43.053718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.806 [2024-11-18 18:37:43.053731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.806 [2024-11-18 18:37:43.053880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.806 [2024-11-18 18:37:43.053903] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.806 [2024-11-18 18:37:43.053915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.053927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.806 [2024-11-18 18:37:43.053955] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:30:44.806 [2024-11-18 18:37:43.053979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:44.806 [2024-11-18 18:37:43.054003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:30:44.806 [2024-11-18 18:37:43.054022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state 
to wait for set number of queues (timeout 30000 ms) 00:30:44.806 [2024-11-18 18:37:43.054040] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.054054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.806 [2024-11-18 18:37:43.054066] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.806 [2024-11-18 18:37:43.054086] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:44.807 [2024-11-18 18:37:43.054130] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.807 [2024-11-18 18:37:43.054254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.807 [2024-11-18 18:37:43.054275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.807 [2024-11-18 18:37:43.054288] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.054300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.807 [2024-11-18 18:37:43.054397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:30:44.807 [2024-11-18 18:37:43.054440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:44.807 [2024-11-18 18:37:43.054470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.054491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.807 [2024-11-18 18:37:43.054517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:44.807 [2024-11-18 18:37:43.054567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.807 [2024-11-18 18:37:43.054756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.807 [2024-11-18 18:37:43.054785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.807 [2024-11-18 18:37:43.054799] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.054810] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:44.807 [2024-11-18 18:37:43.054823] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:44.807 [2024-11-18 18:37:43.054835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.054859] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.054874] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.054903] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.807 [2024-11-18 18:37:43.054921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.807 [2024-11-18 18:37:43.054933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.054944] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.807 [2024-11-18 18:37:43.054995] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:30:44.807 [2024-11-18 18:37:43.055028] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:30:44.807 [2024-11-18 18:37:43.055083] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:30:44.807 [2024-11-18 18:37:43.055119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.055134] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.807 [2024-11-18 18:37:43.055154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.807 [2024-11-18 18:37:43.055188] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.807 [2024-11-18 18:37:43.055383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.807 [2024-11-18 18:37:43.055411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.807 [2024-11-18 18:37:43.055426] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.055437] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:44.807 [2024-11-18 18:37:43.055449] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:44.807 [2024-11-18 18:37:43.055461] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.055479] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.055493] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.055512] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.807 [2024-11-18 18:37:43.055530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.807 [2024-11-18 18:37:43.055542] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.055554] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.807 [2024-11-18 18:37:43.055617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:44.807 [2024-11-18 18:37:43.055658] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:44.807 [2024-11-18 18:37:43.055687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.055718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.807 [2024-11-18 18:37:43.055738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.807 [2024-11-18 18:37:43.055774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.807 [2024-11-18 18:37:43.055941] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.807 [2024-11-18 18:37:43.055969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.807 [2024-11-18 18:37:43.055996] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.056008] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:44.807 [2024-11-18 18:37:43.056020] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:44.807 [2024-11-18 18:37:43.056032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.807 [2024-11-18 
18:37:43.056053] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.056067] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.056087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.807 [2024-11-18 18:37:43.056105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.807 [2024-11-18 18:37:43.056117] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.056129] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.807 [2024-11-18 18:37:43.056158] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:44.807 [2024-11-18 18:37:43.056185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:30:44.807 [2024-11-18 18:37:43.056212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:30:44.807 [2024-11-18 18:37:43.056232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:44.807 [2024-11-18 18:37:43.056262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:44.807 [2024-11-18 18:37:43.056278] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:30:44.807 [2024-11-18 18:37:43.056294] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:30:44.807 [2024-11-18 18:37:43.056312] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:30:44.807 [2024-11-18 18:37:43.056329] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:30:44.807 [2024-11-18 18:37:43.056390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.056407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.807 [2024-11-18 18:37:43.056427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.807 [2024-11-18 18:37:43.056467] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.056483] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.056498] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:44.807 [2024-11-18 18:37:43.056517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.807 [2024-11-18 18:37:43.056550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.807 [2024-11-18 18:37:43.056590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:44.807 [2024-11-18 18:37:43.060638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.807 [2024-11-18 18:37:43.060683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.807 [2024-11-18 18:37:43.060697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.060710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 
00:30:44.807 [2024-11-18 18:37:43.060730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.807 [2024-11-18 18:37:43.060747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.807 [2024-11-18 18:37:43.060758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.060769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:44.807 [2024-11-18 18:37:43.060798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.807 [2024-11-18 18:37:43.060815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:44.808 [2024-11-18 18:37:43.060835] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.808 [2024-11-18 18:37:43.060869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:44.808 [2024-11-18 18:37:43.061005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.808 [2024-11-18 18:37:43.061028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.808 [2024-11-18 18:37:43.061040] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.061052] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:44.808 [2024-11-18 18:37:43.061078] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.061095] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:44.808 [2024-11-18 18:37:43.061120] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.808 [2024-11-18 
18:37:43.061154] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:44.808 [2024-11-18 18:37:43.061259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.808 [2024-11-18 18:37:43.061281] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.808 [2024-11-18 18:37:43.061294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.061306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:44.808 [2024-11-18 18:37:43.061333] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.061350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:44.808 [2024-11-18 18:37:43.061370] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.808 [2024-11-18 18:37:43.061402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:44.808 [2024-11-18 18:37:43.061526] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.808 [2024-11-18 18:37:43.061548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.808 [2024-11-18 18:37:43.061560] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.061577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:44.808 [2024-11-18 18:37:43.061630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.061650] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:44.808 [2024-11-18 18:37:43.061671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE 
(02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.808 [2024-11-18 18:37:43.061695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.061710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.808 [2024-11-18 18:37:43.061729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.808 [2024-11-18 18:37:43.061751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.061766] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000015700) 00:30:44.808 [2024-11-18 18:37:43.061795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.808 [2024-11-18 18:37:43.061826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.061842] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:44.808 [2024-11-18 18:37:43.061862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.808 [2024-11-18 18:37:43.061914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:44.808 [2024-11-18 18:37:43.061941] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.808 [2024-11-18 18:37:43.061982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:30:44.808 [2024-11-18 18:37:43.061995] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:44.808 [2024-11-18 18:37:43.062277] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.808 [2024-11-18 18:37:43.062311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.808 [2024-11-18 18:37:43.062329] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.062341] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8192, cccid=5 00:30:44.808 [2024-11-18 18:37:43.062356] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000015700): expected_datao=0, payload_size=8192 00:30:44.808 [2024-11-18 18:37:43.062369] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.062409] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.062426] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.062447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.808 [2024-11-18 18:37:43.062465] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.808 [2024-11-18 18:37:43.062477] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.062488] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=4 00:30:44.808 [2024-11-18 18:37:43.062501] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:44.808 [2024-11-18 18:37:43.062512] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.062539] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.062554] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.062573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.808 [2024-11-18 18:37:43.062591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.808 [2024-11-18 18:37:43.062620] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.062634] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=6 00:30:44.808 [2024-11-18 18:37:43.062646] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:44.808 [2024-11-18 18:37:43.062658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.062689] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.062702] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.062717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.808 [2024-11-18 18:37:43.062732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.808 [2024-11-18 18:37:43.062743] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.062754] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=7 00:30:44.808 [2024-11-18 18:37:43.062766] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:44.808 [2024-11-18 18:37:43.062777] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.062794] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.062807] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.062821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.808 [2024-11-18 18:37:43.062836] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.808 [2024-11-18 18:37:43.062847] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.062859] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:44.808 [2024-11-18 18:37:43.062916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.808 [2024-11-18 18:37:43.062936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.808 [2024-11-18 18:37:43.062948] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.062959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.808 [2024-11-18 18:37:43.062986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.808 [2024-11-18 18:37:43.063004] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.808 [2024-11-18 18:37:43.063015] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.063026] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000015700 00:30:44.808 [2024-11-18 18:37:43.063045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.808 [2024-11-18 18:37:43.063062] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.808 [2024-11-18 18:37:43.063073] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.808 [2024-11-18 18:37:43.063083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:44.808 
===================================================== 00:30:44.808 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:44.808 ===================================================== 00:30:44.808 Controller Capabilities/Features 00:30:44.808 ================================ 00:30:44.808 Vendor ID: 8086 00:30:44.808 Subsystem Vendor ID: 8086 00:30:44.808 Serial Number: SPDK00000000000001 00:30:44.808 Model Number: SPDK bdev Controller 00:30:44.808 Firmware Version: 25.01 00:30:44.808 Recommended Arb Burst: 6 00:30:44.809 IEEE OUI Identifier: e4 d2 5c 00:30:44.809 Multi-path I/O 00:30:44.809 May have multiple subsystem ports: Yes 00:30:44.809 May have multiple controllers: Yes 00:30:44.809 Associated with SR-IOV VF: No 00:30:44.809 Max Data Transfer Size: 131072 00:30:44.809 Max Number of Namespaces: 32 00:30:44.809 Max Number of I/O Queues: 127 00:30:44.809 NVMe Specification Version (VS): 1.3 00:30:44.809 NVMe Specification Version (Identify): 1.3 00:30:44.809 Maximum Queue Entries: 128 00:30:44.809 Contiguous Queues Required: Yes 00:30:44.809 Arbitration Mechanisms Supported 00:30:44.809 Weighted Round Robin: Not Supported 00:30:44.809 Vendor Specific: Not Supported 00:30:44.809 Reset Timeout: 15000 ms 00:30:44.809 Doorbell Stride: 4 bytes 00:30:44.809 NVM Subsystem Reset: Not Supported 00:30:44.809 Command Sets Supported 00:30:44.809 NVM Command Set: Supported 00:30:44.809 Boot Partition: Not Supported 00:30:44.809 Memory Page Size Minimum: 4096 bytes 00:30:44.809 Memory Page Size Maximum: 4096 bytes 00:30:44.809 Persistent Memory Region: Not Supported 00:30:44.809 Optional Asynchronous Events Supported 00:30:44.809 Namespace Attribute Notices: Supported 00:30:44.809 Firmware Activation Notices: Not Supported 00:30:44.809 ANA Change Notices: Not Supported 00:30:44.809 PLE Aggregate Log Change Notices: Not Supported 00:30:44.809 LBA Status Info Alert Notices: Not Supported 00:30:44.809 EGE Aggregate Log Change Notices: Not Supported 
00:30:44.809 Normal NVM Subsystem Shutdown event: Not Supported 00:30:44.809 Zone Descriptor Change Notices: Not Supported 00:30:44.809 Discovery Log Change Notices: Not Supported 00:30:44.809 Controller Attributes 00:30:44.809 128-bit Host Identifier: Supported 00:30:44.809 Non-Operational Permissive Mode: Not Supported 00:30:44.809 NVM Sets: Not Supported 00:30:44.809 Read Recovery Levels: Not Supported 00:30:44.809 Endurance Groups: Not Supported 00:30:44.809 Predictable Latency Mode: Not Supported 00:30:44.809 Traffic Based Keep ALive: Not Supported 00:30:44.809 Namespace Granularity: Not Supported 00:30:44.809 SQ Associations: Not Supported 00:30:44.809 UUID List: Not Supported 00:30:44.809 Multi-Domain Subsystem: Not Supported 00:30:44.809 Fixed Capacity Management: Not Supported 00:30:44.809 Variable Capacity Management: Not Supported 00:30:44.809 Delete Endurance Group: Not Supported 00:30:44.809 Delete NVM Set: Not Supported 00:30:44.809 Extended LBA Formats Supported: Not Supported 00:30:44.809 Flexible Data Placement Supported: Not Supported 00:30:44.809 00:30:44.809 Controller Memory Buffer Support 00:30:44.809 ================================ 00:30:44.809 Supported: No 00:30:44.809 00:30:44.809 Persistent Memory Region Support 00:30:44.809 ================================ 00:30:44.809 Supported: No 00:30:44.809 00:30:44.809 Admin Command Set Attributes 00:30:44.809 ============================ 00:30:44.809 Security Send/Receive: Not Supported 00:30:44.809 Format NVM: Not Supported 00:30:44.809 Firmware Activate/Download: Not Supported 00:30:44.809 Namespace Management: Not Supported 00:30:44.809 Device Self-Test: Not Supported 00:30:44.809 Directives: Not Supported 00:30:44.809 NVMe-MI: Not Supported 00:30:44.809 Virtualization Management: Not Supported 00:30:44.809 Doorbell Buffer Config: Not Supported 00:30:44.809 Get LBA Status Capability: Not Supported 00:30:44.809 Command & Feature Lockdown Capability: Not Supported 00:30:44.809 Abort Command 
Limit: 4 00:30:44.809 Async Event Request Limit: 4 00:30:44.809 Number of Firmware Slots: N/A 00:30:44.809 Firmware Slot 1 Read-Only: N/A 00:30:44.809 Firmware Activation Without Reset: N/A 00:30:44.809 Multiple Update Detection Support: N/A 00:30:44.809 Firmware Update Granularity: No Information Provided 00:30:44.809 Per-Namespace SMART Log: No 00:30:44.809 Asymmetric Namespace Access Log Page: Not Supported 00:30:44.809 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:44.809 Command Effects Log Page: Supported 00:30:44.809 Get Log Page Extended Data: Supported 00:30:44.809 Telemetry Log Pages: Not Supported 00:30:44.809 Persistent Event Log Pages: Not Supported 00:30:44.809 Supported Log Pages Log Page: May Support 00:30:44.809 Commands Supported & Effects Log Page: Not Supported 00:30:44.809 Feature Identifiers & Effects Log Page:May Support 00:30:44.809 NVMe-MI Commands & Effects Log Page: May Support 00:30:44.809 Data Area 4 for Telemetry Log: Not Supported 00:30:44.809 Error Log Page Entries Supported: 128 00:30:44.809 Keep Alive: Supported 00:30:44.809 Keep Alive Granularity: 10000 ms 00:30:44.809 00:30:44.809 NVM Command Set Attributes 00:30:44.809 ========================== 00:30:44.809 Submission Queue Entry Size 00:30:44.809 Max: 64 00:30:44.809 Min: 64 00:30:44.809 Completion Queue Entry Size 00:30:44.809 Max: 16 00:30:44.809 Min: 16 00:30:44.809 Number of Namespaces: 32 00:30:44.809 Compare Command: Supported 00:30:44.809 Write Uncorrectable Command: Not Supported 00:30:44.809 Dataset Management Command: Supported 00:30:44.809 Write Zeroes Command: Supported 00:30:44.809 Set Features Save Field: Not Supported 00:30:44.809 Reservations: Supported 00:30:44.809 Timestamp: Not Supported 00:30:44.809 Copy: Supported 00:30:44.809 Volatile Write Cache: Present 00:30:44.809 Atomic Write Unit (Normal): 1 00:30:44.809 Atomic Write Unit (PFail): 1 00:30:44.809 Atomic Compare & Write Unit: 1 00:30:44.809 Fused Compare & Write: Supported 00:30:44.809 Scatter-Gather 
List 00:30:44.809 SGL Command Set: Supported 00:30:44.809 SGL Keyed: Supported 00:30:44.809 SGL Bit Bucket Descriptor: Not Supported 00:30:44.809 SGL Metadata Pointer: Not Supported 00:30:44.809 Oversized SGL: Not Supported 00:30:44.809 SGL Metadata Address: Not Supported 00:30:44.809 SGL Offset: Supported 00:30:44.809 Transport SGL Data Block: Not Supported 00:30:44.809 Replay Protected Memory Block: Not Supported 00:30:44.809 00:30:44.809 Firmware Slot Information 00:30:44.809 ========================= 00:30:44.809 Active slot: 1 00:30:44.809 Slot 1 Firmware Revision: 25.01 00:30:44.809 00:30:44.809 00:30:44.809 Commands Supported and Effects 00:30:44.809 ============================== 00:30:44.809 Admin Commands 00:30:44.809 -------------- 00:30:44.809 Get Log Page (02h): Supported 00:30:44.809 Identify (06h): Supported 00:30:44.809 Abort (08h): Supported 00:30:44.809 Set Features (09h): Supported 00:30:44.809 Get Features (0Ah): Supported 00:30:44.809 Asynchronous Event Request (0Ch): Supported 00:30:44.809 Keep Alive (18h): Supported 00:30:44.809 I/O Commands 00:30:44.809 ------------ 00:30:44.809 Flush (00h): Supported LBA-Change 00:30:44.809 Write (01h): Supported LBA-Change 00:30:44.809 Read (02h): Supported 00:30:44.809 Compare (05h): Supported 00:30:44.809 Write Zeroes (08h): Supported LBA-Change 00:30:44.809 Dataset Management (09h): Supported LBA-Change 00:30:44.809 Copy (19h): Supported LBA-Change 00:30:44.809 00:30:44.809 Error Log 00:30:44.809 ========= 00:30:44.809 00:30:44.809 Arbitration 00:30:44.809 =========== 00:30:44.809 Arbitration Burst: 1 00:30:44.809 00:30:44.809 Power Management 00:30:44.810 ================ 00:30:44.810 Number of Power States: 1 00:30:44.810 Current Power State: Power State #0 00:30:44.810 Power State #0: 00:30:44.810 Max Power: 0.00 W 00:30:44.810 Non-Operational State: Operational 00:30:44.810 Entry Latency: Not Reported 00:30:44.810 Exit Latency: Not Reported 00:30:44.810 Relative Read Throughput: 0 00:30:44.810 
Relative Read Latency: 0 00:30:44.810 Relative Write Throughput: 0 00:30:44.810 Relative Write Latency: 0 00:30:44.810 Idle Power: Not Reported 00:30:44.810 Active Power: Not Reported 00:30:44.810 Non-Operational Permissive Mode: Not Supported 00:30:44.810 00:30:44.810 Health Information 00:30:44.810 ================== 00:30:44.810 Critical Warnings: 00:30:44.810 Available Spare Space: OK 00:30:44.810 Temperature: OK 00:30:44.810 Device Reliability: OK 00:30:44.810 Read Only: No 00:30:44.810 Volatile Memory Backup: OK 00:30:44.810 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:44.810 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:44.810 Available Spare: 0% 00:30:44.810 Available Spare Threshold: 0% 00:30:44.810 Life Percentage Used:[2024-11-18 18:37:43.063292] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.810 [2024-11-18 18:37:43.063312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:44.810 [2024-11-18 18:37:43.063334] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.810 [2024-11-18 18:37:43.063368] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:44.810 [2024-11-18 18:37:43.063528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.810 [2024-11-18 18:37:43.063555] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.810 [2024-11-18 18:37:43.063569] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.810 [2024-11-18 18:37:43.063587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:44.810 [2024-11-18 18:37:43.063677] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:30:44.810 [2024-11-18 18:37:43.063710] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.810 [2024-11-18 18:37:43.063732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.810 [2024-11-18 18:37:43.063748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:44.810 [2024-11-18 18:37:43.063763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.810 [2024-11-18 18:37:43.063777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:44.810 [2024-11-18 18:37:43.063791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.810 [2024-11-18 18:37:43.063820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.810 [2024-11-18 18:37:43.063835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.810 [2024-11-18 18:37:43.063856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.810 [2024-11-18 18:37:43.063870] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.810 [2024-11-18 18:37:43.063882] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.810 [2024-11-18 18:37:43.063909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.810 [2024-11-18 18:37:43.063944] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.810 [2024-11-18 18:37:43.064089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:30:44.810 [2024-11-18 18:37:43.064128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.810 [2024-11-18 18:37:43.064141] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.810 [2024-11-18 18:37:43.064153] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.810 [2024-11-18 18:37:43.064182] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.810 [2024-11-18 18:37:43.064197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.810 [2024-11-18 18:37:43.064209] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.810 [2024-11-18 18:37:43.064229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.810 [2024-11-18 18:37:43.064270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.810 [2024-11-18 18:37:43.064433] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.810 [2024-11-18 18:37:43.064454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.810 [2024-11-18 18:37:43.064467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.810 [2024-11-18 18:37:43.064478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.810 [2024-11-18 18:37:43.064494] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:30:44.810 [2024-11-18 18:37:43.064508] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:30:44.810 [2024-11-18 18:37:43.064542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.810 [2024-11-18 18:37:43.064558] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.810 [2024-11-18 18:37:43.064575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.810 [2024-11-18 18:37:43.064603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.810 [2024-11-18 18:37:43.068674] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.810 [2024-11-18 18:37:43.068786] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.810 [2024-11-18 18:37:43.068809] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.810 [2024-11-18 18:37:43.068822] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.810 [2024-11-18 18:37:43.068834] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.810 [2024-11-18 18:37:43.068859] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:30:44.810 0% 00:30:44.810 Data Units Read: 0 00:30:44.810 Data Units Written: 0 00:30:44.810 Host Read Commands: 0 00:30:44.810 Host Write Commands: 0 00:30:44.811 Controller Busy Time: 0 minutes 00:30:44.811 Power Cycles: 0 00:30:44.811 Power On Hours: 0 hours 00:30:44.811 Unsafe Shutdowns: 0 00:30:44.811 Unrecoverable Media Errors: 0 00:30:44.811 Lifetime Error Log Entries: 0 00:30:44.811 Warning Temperature Time: 0 minutes 00:30:44.811 Critical Temperature Time: 0 minutes 00:30:44.811 00:30:44.811 Number of Queues 00:30:44.811 ================ 00:30:44.811 Number of I/O Submission Queues: 127 00:30:44.811 Number of I/O Completion Queues: 127 00:30:44.811 00:30:44.811 Active Namespaces 00:30:44.811 ================= 00:30:44.811 Namespace ID:1 00:30:44.811 Error Recovery Timeout: Unlimited 00:30:44.811 Command Set Identifier: NVM (00h) 
00:30:44.811 Deallocate: Supported 00:30:44.811 Deallocated/Unwritten Error: Not Supported 00:30:44.811 Deallocated Read Value: Unknown 00:30:44.811 Deallocate in Write Zeroes: Not Supported 00:30:44.811 Deallocated Guard Field: 0xFFFF 00:30:44.811 Flush: Supported 00:30:44.811 Reservation: Supported 00:30:44.811 Namespace Sharing Capabilities: Multiple Controllers 00:30:44.811 Size (in LBAs): 131072 (0GiB) 00:30:44.811 Capacity (in LBAs): 131072 (0GiB) 00:30:44.811 Utilization (in LBAs): 131072 (0GiB) 00:30:44.811 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:44.811 EUI64: ABCDEF0123456789 00:30:44.811 UUID: 5266cd23-595d-49c5-bad9-ff33e1faa5d9 00:30:44.811 Thin Provisioning: Not Supported 00:30:44.811 Per-NS Atomic Units: Yes 00:30:44.811 Atomic Boundary Size (Normal): 0 00:30:44.811 Atomic Boundary Size (PFail): 0 00:30:44.811 Atomic Boundary Offset: 0 00:30:44.811 Maximum Single Source Range Length: 65535 00:30:44.811 Maximum Copy Length: 65535 00:30:44.811 Maximum Source Range Count: 1 00:30:44.811 NGUID/EUI64 Never Reused: No 00:30:44.811 Namespace Write Protected: No 00:30:44.811 Number of LBA Formats: 1 00:30:44.811 Current LBA Format: LBA Format #00 00:30:44.811 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:44.811 00:30:44.811 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:44.811 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:44.811 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.811 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 
00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:45.069 rmmod nvme_tcp 00:30:45.069 rmmod nvme_fabrics 00:30:45.069 rmmod nvme_keyring 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3070801 ']' 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3070801 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3070801 ']' 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3070801 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3070801 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:45.069 18:37:43 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3070801' 00:30:45.069 killing process with pid 3070801 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3070801 00:30:45.069 18:37:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3070801 00:30:46.442 18:37:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:46.442 18:37:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:46.442 18:37:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:46.442 18:37:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:30:46.442 18:37:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:30:46.442 18:37:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:46.442 18:37:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:30:46.442 18:37:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:46.442 18:37:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:46.442 18:37:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.442 18:37:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:46.442 18:37:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.341 18:37:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:48.341 00:30:48.341 real 0m7.626s 00:30:48.341 user 0m11.364s 00:30:48.341 sys 0m2.238s 00:30:48.341 18:37:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:48.341 18:37:46 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:48.341 ************************************ 00:30:48.341 END TEST nvmf_identify 00:30:48.341 ************************************ 00:30:48.341 18:37:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:48.341 18:37:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:48.341 18:37:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:48.341 18:37:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.341 ************************************ 00:30:48.341 START TEST nvmf_perf 00:30:48.341 ************************************ 00:30:48.341 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:48.341 * Looking for test storage... 
00:30:48.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:48.341 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:48.341 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:30:48.341 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:48.599 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:48.599 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:48.599 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:48.599 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:48.599 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:48.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.600 --rc genhtml_branch_coverage=1 00:30:48.600 --rc genhtml_function_coverage=1 00:30:48.600 --rc genhtml_legend=1 00:30:48.600 --rc geninfo_all_blocks=1 00:30:48.600 --rc geninfo_unexecuted_blocks=1 00:30:48.600 00:30:48.600 ' 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:48.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:30:48.600 --rc genhtml_branch_coverage=1 00:30:48.600 --rc genhtml_function_coverage=1 00:30:48.600 --rc genhtml_legend=1 00:30:48.600 --rc geninfo_all_blocks=1 00:30:48.600 --rc geninfo_unexecuted_blocks=1 00:30:48.600 00:30:48.600 ' 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:48.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.600 --rc genhtml_branch_coverage=1 00:30:48.600 --rc genhtml_function_coverage=1 00:30:48.600 --rc genhtml_legend=1 00:30:48.600 --rc geninfo_all_blocks=1 00:30:48.600 --rc geninfo_unexecuted_blocks=1 00:30:48.600 00:30:48.600 ' 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:48.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.600 --rc genhtml_branch_coverage=1 00:30:48.600 --rc genhtml_function_coverage=1 00:30:48.600 --rc genhtml_legend=1 00:30:48.600 --rc geninfo_all_blocks=1 00:30:48.600 --rc geninfo_unexecuted_blocks=1 00:30:48.600 00:30:48.600 ' 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:48.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:48.600 18:37:46 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:48.600 18:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:50.499 18:37:48 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.499 
18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:50.499 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.499 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:50.500 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:50.500 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.500 18:37:48 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:50.500 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.500 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:50.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:50.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:30:50.758 00:30:50.758 --- 10.0.0.2 ping statistics --- 00:30:50.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.758 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:50.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:30:50.758 00:30:50.758 --- 10.0.0.1 ping statistics --- 00:30:50.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.758 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3073151 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3073151 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3073151 ']' 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:50.758 18:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:50.758 [2024-11-18 18:37:49.017601] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:30:50.758 [2024-11-18 18:37:49.017771] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.029 [2024-11-18 18:37:49.160732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:51.029 [2024-11-18 18:37:49.296384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.029 [2024-11-18 18:37:49.296459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:51.029 [2024-11-18 18:37:49.296485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:51.029 [2024-11-18 18:37:49.296514] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:51.029 [2024-11-18 18:37:49.296534] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:51.029 [2024-11-18 18:37:49.299290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.029 [2024-11-18 18:37:49.299363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:51.029 [2024-11-18 18:37:49.299456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.029 [2024-11-18 18:37:49.299463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:51.994 18:37:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:51.994 18:37:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:30:51.994 18:37:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:51.994 18:37:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:51.994 18:37:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:51.994 18:37:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.994 18:37:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:51.994 18:37:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:55.299 18:37:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:55.299 18:37:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:55.299 18:37:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:55.299 18:37:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:55.557 18:37:53 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:55.557 18:37:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:55.557 18:37:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:55.557 18:37:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:55.557 18:37:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:55.815 [2024-11-18 18:37:54.078118] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:55.815 18:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:56.072 18:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:56.072 18:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:56.330 18:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:56.330 18:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:56.896 18:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:56.896 [2024-11-18 18:37:55.192625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:56.896 18:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:30:57.153 18:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:57.153 18:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:57.153 18:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:57.153 18:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:59.048 Initializing NVMe Controllers 00:30:59.048 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:30:59.048 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:30:59.048 Initialization complete. Launching workers. 00:30:59.048 ======================================================== 00:30:59.048 Latency(us) 00:30:59.048 Device Information : IOPS MiB/s Average min max 00:30:59.048 PCIE (0000:88:00.0) NSID 1 from core 0: 74396.91 290.61 429.29 43.06 7269.59 00:30:59.048 ======================================================== 00:30:59.048 Total : 74396.91 290.61 429.29 43.06 7269.59 00:30:59.048 00:30:59.048 18:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:00.417 Initializing NVMe Controllers 00:31:00.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:00.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:00.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:00.417 Initialization complete. Launching workers. 
00:31:00.417 ======================================================== 00:31:00.417 Latency(us) 00:31:00.417 Device Information : IOPS MiB/s Average min max 00:31:00.417 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 75.00 0.29 13726.54 203.61 45756.43 00:31:00.417 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 18648.13 4983.68 47949.54 00:31:00.417 ======================================================== 00:31:00.417 Total : 131.00 0.51 15830.43 203.61 47949.54 00:31:00.417 00:31:00.417 18:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:01.793 Initializing NVMe Controllers 00:31:01.793 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:01.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:01.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:01.793 Initialization complete. Launching workers. 
00:31:01.793 ======================================================== 00:31:01.793 Latency(us) 00:31:01.793 Device Information : IOPS MiB/s Average min max 00:31:01.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5476.98 21.39 5871.19 791.33 12117.44 00:31:01.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3847.99 15.03 8355.73 6246.19 16202.55 00:31:01.793 ======================================================== 00:31:01.793 Total : 9324.97 36.43 6896.45 791.33 16202.55 00:31:01.793 00:31:01.793 18:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:01.793 18:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:01.793 18:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:05.073 Initializing NVMe Controllers 00:31:05.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:05.073 Controller IO queue size 128, less than required. 00:31:05.073 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:05.073 Controller IO queue size 128, less than required. 00:31:05.073 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:05.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:05.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:05.073 Initialization complete. Launching workers. 
00:31:05.073 ======================================================== 00:31:05.073 Latency(us) 00:31:05.073 Device Information : IOPS MiB/s Average min max 00:31:05.073 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1336.99 334.25 100512.54 64238.01 291850.14 00:31:05.073 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 536.90 134.22 257091.49 136165.21 507962.42 00:31:05.073 ======================================================== 00:31:05.073 Total : 1873.89 468.47 145374.58 64238.01 507962.42 00:31:05.073 00:31:05.073 18:38:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:05.073 No valid NVMe controllers or AIO or URING devices found 00:31:05.073 Initializing NVMe Controllers 00:31:05.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:05.073 Controller IO queue size 128, less than required. 00:31:05.073 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:05.073 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:05.073 Controller IO queue size 128, less than required. 00:31:05.073 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:05.073 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:31:05.073 WARNING: Some requested NVMe devices were skipped 00:31:05.073 18:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:08.354 Initializing NVMe Controllers 00:31:08.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:08.354 Controller IO queue size 128, less than required. 00:31:08.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:08.354 Controller IO queue size 128, less than required. 00:31:08.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:08.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:08.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:08.354 Initialization complete. Launching workers. 
00:31:08.354
00:31:08.354 ====================
00:31:08.354 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:31:08.354 TCP transport:
00:31:08.354 polls: 6311
00:31:08.354 idle_polls: 3931
00:31:08.354 sock_completions: 2380
00:31:08.354 nvme_completions: 4375
00:31:08.354 submitted_requests: 6558
00:31:08.354 queued_requests: 1
00:31:08.354
00:31:08.354 ====================
00:31:08.354 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:31:08.354 TCP transport:
00:31:08.354 polls: 7214
00:31:08.354 idle_polls: 4271
00:31:08.354 sock_completions: 2943
00:31:08.354 nvme_completions: 5131
00:31:08.354 submitted_requests: 7614
00:31:08.354 queued_requests: 1
00:31:08.354 ========================================================
00:31:08.354 Latency(us)
00:31:08.354 Device Information : IOPS MiB/s Average min max
00:31:08.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1091.94 272.98 121083.14 89866.73 286705.01
00:31:08.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1280.67 320.17 106167.00 61898.01 413902.00
00:31:08.354 ========================================================
00:31:08.354 Total : 2372.60 593.15 113031.82 61898.01 413902.00
00:31:08.354
00:31:08.354 18:38:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:31:08.354 18:38:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:08.354 18:38:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:31:08.354 18:38:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']'
00:31:08.354 18:38:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:31:12.537 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf --
host/perf.sh@72 -- # ls_guid=18e5b85e-8651-4df4-b77c-b6c86e740040
00:31:12.537 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 18e5b85e-8651-4df4-b77c-b6c86e740040
00:31:12.537 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=18e5b85e-8651-4df4-b77c-b6c86e740040
00:31:12.537 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info
00:31:12.537 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc
00:31:12.537 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs
00:31:12.537 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:31:12.538 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[
00:31:12.538 {
00:31:12.538 "uuid": "18e5b85e-8651-4df4-b77c-b6c86e740040",
00:31:12.538 "name": "lvs_0",
00:31:12.538 "base_bdev": "Nvme0n1",
00:31:12.538 "total_data_clusters": 238234,
00:31:12.538 "free_clusters": 238234,
00:31:12.538 "block_size": 512,
00:31:12.538 "cluster_size": 4194304
00:31:12.538 }
00:31:12.538 ]'
00:31:12.538 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="18e5b85e-8651-4df4-b77c-b6c86e740040") .free_clusters'
00:31:12.538 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234
00:31:12.538 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="18e5b85e-8651-4df4-b77c-b6c86e740040") .cluster_size'
00:31:12.538 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304
00:31:12.538 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936
00:31:12.538 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936
00:31:12.538 952936
00:31:12.538 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']'
00:31:12.538 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480
00:31:12.538 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 18e5b85e-8651-4df4-b77c-b6c86e740040 lbd_0 20480
00:31:12.796 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=4fad6fe8-6c5d-4371-8c2c-d0f3abf2f652
00:31:12.796 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 4fad6fe8-6c5d-4371-8c2c-d0f3abf2f652 lvs_n_0
00:31:13.730 18:38:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=b6a14ef5-01d1-4acb-ad7f-83e56ffc7a17
00:31:13.730 18:38:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb b6a14ef5-01d1-4acb-ad7f-83e56ffc7a17
00:31:13.730 18:38:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=b6a14ef5-01d1-4acb-ad7f-83e56ffc7a17
00:31:13.730 18:38:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info
00:31:13.730 18:38:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc
00:31:13.730 18:38:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs
00:31:13.730 18:38:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:31:13.988 18:38:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[
00:31:13.988 {
00:31:13.988 "uuid": "18e5b85e-8651-4df4-b77c-b6c86e740040",
00:31:13.988 "name": "lvs_0",
00:31:13.988 "base_bdev": "Nvme0n1",
00:31:13.988 "total_data_clusters": 238234,
00:31:13.988 "free_clusters": 233114,
00:31:13.988 "block_size": 512,
00:31:13.988
"cluster_size": 4194304
00:31:13.988 },
00:31:13.988 {
00:31:13.988 "uuid": "b6a14ef5-01d1-4acb-ad7f-83e56ffc7a17",
00:31:13.988 "name": "lvs_n_0",
00:31:13.988 "base_bdev": "4fad6fe8-6c5d-4371-8c2c-d0f3abf2f652",
00:31:13.988 "total_data_clusters": 5114,
00:31:13.988 "free_clusters": 5114,
00:31:13.988 "block_size": 512,
00:31:13.988 "cluster_size": 4194304
00:31:13.988 }
00:31:13.988 ]'
00:31:13.988 18:38:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="b6a14ef5-01d1-4acb-ad7f-83e56ffc7a17") .free_clusters'
00:31:13.988 18:38:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114
00:31:13.988 18:38:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="b6a14ef5-01d1-4acb-ad7f-83e56ffc7a17") .cluster_size'
00:31:13.988 18:38:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304
00:31:13.988 18:38:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456
00:31:13.988 18:38:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456
00:31:13.988 20456
00:31:13.988 18:38:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']'
00:31:13.988 18:38:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b6a14ef5-01d1-4acb-ad7f-83e56ffc7a17 lbd_nest_0 20456
00:31:14.246 18:38:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=691924d2-811d-456b-a1cf-3c1f936a2954
00:31:14.246 18:38:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:14.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid
00:31:14.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 691924d2-811d-456b-a1cf-3c1f936a2954
00:31:14.762 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:15.020 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128")
00:31:15.020 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072")
00:31:15.020 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:31:15.020 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:15.020 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:27.214 Initializing NVMe Controllers
00:31:27.214 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:27.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:27.214 Initialization complete. Launching workers.
00:31:27.214 ========================================================
00:31:27.214 Latency(us)
00:31:27.214 Device Information : IOPS MiB/s Average min max
00:31:27.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 48.78 0.02 20547.09 248.77 45945.65
00:31:27.214 ========================================================
00:31:27.214 Total : 48.78 0.02 20547.09 248.77 45945.65
00:31:27.214
00:31:27.214 18:38:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:27.214 18:38:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:37.181 Initializing NVMe Controllers
00:31:37.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:37.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:37.181 Initialization complete. Launching workers.
00:31:37.181 ========================================================
00:31:37.181 Latency(us)
00:31:37.181 Device Information : IOPS MiB/s Average min max
00:31:37.181 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 78.40 9.80 12762.25 6380.09 47889.70
00:31:37.181 ========================================================
00:31:37.181 Total : 78.40 9.80 12762.25 6380.09 47889.70
00:31:37.181
00:31:37.181 18:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:31:37.181 18:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:37.181 18:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:47.148 Initializing NVMe Controllers
00:31:47.148 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:47.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:47.148 Initialization complete. Launching workers.
00:31:47.148 ========================================================
00:31:47.148 Latency(us)
00:31:47.148 Device Information : IOPS MiB/s Average min max
00:31:47.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4815.70 2.35 6644.60 641.65 15057.84
00:31:47.148 ========================================================
00:31:47.148 Total : 4815.70 2.35 6644.60 641.65 15057.84
00:31:47.148
00:31:47.148 18:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:47.148 18:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:57.157 Initializing NVMe Controllers
00:31:57.157 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:57.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:57.157 Initialization complete. Launching workers.
00:31:57.157 ========================================================
00:31:57.157 Latency(us)
00:31:57.157 Device Information : IOPS MiB/s Average min max
00:31:57.157 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2649.59 331.20 12086.78 1003.82 25111.34
00:31:57.157 ========================================================
00:31:57.157 Total : 2649.59 331.20 12086.78 1003.82 25111.34
00:31:57.157
00:31:57.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:31:57.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:57.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:32:09.349 Initializing NVMe Controllers
00:32:09.349 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:09.349 Controller IO queue size 128, less than required.
00:32:09.349 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:09.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:09.349 Initialization complete. Launching workers.
00:32:09.349 ========================================================
00:32:09.349 Latency(us)
00:32:09.349 Device Information : IOPS MiB/s Average min max
00:32:09.349 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8397.91 4.10 15245.53 1829.59 38049.95
00:32:09.349 ========================================================
00:32:09.349 Total : 8397.91 4.10 15245.53 1829.59 38049.95
00:32:09.349
00:32:09.349 18:39:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:32:09.349 18:39:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:32:19.311 Initializing NVMe Controllers
00:32:19.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:19.311 Controller IO queue size 128, less than required.
00:32:19.311 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:19.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:19.311 Initialization complete. Launching workers.
00:32:19.311 ========================================================
00:32:19.311 Latency(us)
00:32:19.311 Device Information : IOPS MiB/s Average min max
00:32:19.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1187.04 148.38 108102.12 23334.76 233491.43
00:32:19.311 ========================================================
00:32:19.311 Total : 1187.04 148.38 108102.12 23334.76 233491.43
00:32:19.311
00:32:19.311 18:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:19.311 18:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 691924d2-811d-456b-a1cf-3c1f936a2954
00:32:19.311 18:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:32:19.569 18:39:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4fad6fe8-6c5d-4371-8c2c-d0f3abf2f652
00:32:20.134 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i
in {1..20}
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:20.391 rmmod nvme_tcp
00:32:20.391 rmmod nvme_fabrics
00:32:20.391 rmmod nvme_keyring
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3073151 ']'
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3073151
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3073151 ']'
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3073151
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3073151
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3073151'
00:32:20.391 killing process with pid 3073151
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3073151
00:32:20.391 18:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3073151
00:32:22.919 18:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:22.919 18:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- #
[[ tcp == \t\c\p ]]
00:32:22.919 18:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:22.919 18:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr
00:32:22.919 18:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save
00:32:22.919 18:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:22.919 18:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore
00:32:22.919 18:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:22.919 18:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:22.919 18:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:22.919 18:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:22.919 18:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:24.815 18:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:24.815
00:32:24.815 real 1m36.528s
00:32:24.815 user 5m56.692s
00:32:24.815 sys 0m15.833s
00:32:24.815 18:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:24.815 18:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:32:24.815 ************************************
00:32:24.815 END TEST nvmf_perf
00:32:24.815 ************************************
00:32:24.815 18:39:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:32:24.815 18:39:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:32:24.815 18:39:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:24.815 18:39:23 nvmf_tcp.nvmf_host --
common/autotest_common.sh@10 -- # set +x
00:32:25.073 ************************************
00:32:25.073 START TEST nvmf_fio_host
00:32:25.073 ************************************
00:32:25.073 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:32:25.073 * Looking for test storage...
00:32:25.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:32:25.073 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:32:25.073 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version
00:32:25.073 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:32:25.073 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:32:25.073 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:25.073 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:25.073 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:25.073 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-:
00:32:25.073 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1
00:32:25.073 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-:
00:32:25.073 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2
00:32:25.073 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<'
00:32:25.073 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2
00:32:25.073 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1
00:32:25.073 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host --
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 --
# export 'LCOV_OPTS=
00:32:25.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:25.074 --rc genhtml_branch_coverage=1
00:32:25.074 --rc genhtml_function_coverage=1
00:32:25.074 --rc genhtml_legend=1
00:32:25.074 --rc geninfo_all_blocks=1
00:32:25.074 --rc geninfo_unexecuted_blocks=1
00:32:25.074
00:32:25.074 '
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:32:25.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:25.074 --rc genhtml_branch_coverage=1
00:32:25.074 --rc genhtml_function_coverage=1
00:32:25.074 --rc genhtml_legend=1
00:32:25.074 --rc geninfo_all_blocks=1
00:32:25.074 --rc geninfo_unexecuted_blocks=1
00:32:25.074
00:32:25.074 '
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:32:25.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:25.074 --rc genhtml_branch_coverage=1
00:32:25.074 --rc genhtml_function_coverage=1
00:32:25.074 --rc genhtml_legend=1
00:32:25.074 --rc geninfo_all_blocks=1
00:32:25.074 --rc geninfo_unexecuted_blocks=1
00:32:25.074
00:32:25.074 '
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:32:25.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:25.074 --rc genhtml_branch_coverage=1
00:32:25.074 --rc genhtml_function_coverage=1
00:32:25.074 --rc genhtml_legend=1
00:32:25.074 --rc geninfo_all_blocks=1
00:32:25.074 --rc geninfo_unexecuted_blocks=1
00:32:25.074
00:32:25.074 '
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:25.074 18:39:23
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host --
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:25.074 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.075 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.075 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:25.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:25.075 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:25.075 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:25.075 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:25.075 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:25.075 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:25.075 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:25.075 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:25.075 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:25.075 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:25.075 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:25.075 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.075 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.075 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.075 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:25.075 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:25.075 18:39:23 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:25.075 18:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.973 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:26.973 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:26.973 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:26.973 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:26.973 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:26.973 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:26.973 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:26.973 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:26.973 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:26.973 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:32:26.973 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:26.973 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:32:26.973 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:26.973 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:32:26.973 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:26.973 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:26.973 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:26.973 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:26.973 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:26.973 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:32:26.974 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:26.974 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.974 18:39:25 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:26.974 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:26.974 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:26.974 18:39:25 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:26.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:26.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:32:26.974 00:32:26.974 --- 10.0.0.2 ping statistics --- 00:32:26.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.974 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:26.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:26.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:32:26.974 00:32:26.974 --- 10.0.0.1 ping statistics --- 00:32:26.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.974 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:26.974 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:26.975 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:26.975 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:26.975 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.975 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3086337 00:32:26.975 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:26.975 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:26.975 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3086337 00:32:26.975 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3086337 ']' 00:32:26.975 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.975 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:26.975 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:26.975 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:26.975 18:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.233 [2024-11-18 18:39:25.381154] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:32:27.233 [2024-11-18 18:39:25.381294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:27.233 [2024-11-18 18:39:25.525737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:27.491 [2024-11-18 18:39:25.652646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:27.491 [2024-11-18 18:39:25.652739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:27.491 [2024-11-18 18:39:25.652762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:27.491 [2024-11-18 18:39:25.652784] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:27.491 [2024-11-18 18:39:25.652801] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:27.491 [2024-11-18 18:39:25.655368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.491 [2024-11-18 18:39:25.655431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:27.491 [2024-11-18 18:39:25.655477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.491 [2024-11-18 18:39:25.655499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:28.057 18:39:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:28.057 18:39:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:32:28.057 18:39:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:28.314 [2024-11-18 18:39:26.604977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.314 18:39:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:28.314 18:39:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:28.314 18:39:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.571 18:39:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:28.829 Malloc1 00:32:28.829 18:39:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:29.086 18:39:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:29.344 18:39:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:29.601 [2024-11-18 18:39:27.893317] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:29.601 18:39:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:29.859 18:39:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:29.859 18:39:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:29.859 18:39:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:29.859 18:39:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:29.859 18:39:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:29.859 18:39:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:29.859 18:39:28 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:29.859 18:39:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:29.859 18:39:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:29.859 18:39:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:29.859 18:39:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:29.859 18:39:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:29.859 18:39:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:30.116 18:39:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:30.116 18:39:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:30.116 18:39:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:30.116 18:39:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:30.116 18:39:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:30.116 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:30.116 fio-3.35 00:32:30.116 Starting 1 thread 00:32:32.644 00:32:32.644 test: (groupid=0, jobs=1): err= 0: pid=3086864: Mon Nov 18 18:39:30 2024 00:32:32.644 read: 
IOPS=6346, BW=24.8MiB/s (26.0MB/s)(49.8MiB/2009msec) 00:32:32.644 slat (usec): min=2, max=276, avg= 3.86, stdev= 3.62 00:32:32.644 clat (usec): min=4094, max=18519, avg=10918.74, stdev=1042.24 00:32:32.644 lat (usec): min=4148, max=18522, avg=10922.60, stdev=1042.20 00:32:32.644 clat percentiles (usec): 00:32:32.644 | 1.00th=[ 8455], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10159], 00:32:32.644 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:32:32.644 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12125], 95.00th=[12518], 00:32:32.644 | 99.00th=[13829], 99.50th=[14746], 99.90th=[16712], 99.95th=[17695], 00:32:32.644 | 99.99th=[18482] 00:32:32.644 bw ( KiB/s): min=24384, max=25928, per=99.95%, avg=25374.00, stdev=683.25, samples=4 00:32:32.644 iops : min= 6096, max= 6482, avg=6343.50, stdev=170.81, samples=4 00:32:32.644 write: IOPS=6342, BW=24.8MiB/s (26.0MB/s)(49.8MiB/2009msec); 0 zone resets 00:32:32.644 slat (usec): min=3, max=235, avg= 3.98, stdev= 2.71 00:32:32.644 clat (usec): min=2871, max=18220, avg=9117.94, stdev=927.11 00:32:32.644 lat (usec): min=2897, max=18224, avg=9121.92, stdev=927.29 00:32:32.644 clat percentiles (usec): 00:32:32.644 | 1.00th=[ 6915], 5.00th=[ 7832], 10.00th=[ 8160], 20.00th=[ 8455], 00:32:32.644 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9241], 00:32:32.644 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10421], 00:32:32.644 | 99.00th=[12125], 99.50th=[13173], 99.90th=[17433], 99.95th=[17695], 00:32:32.644 | 99.99th=[17957] 00:32:32.644 bw ( KiB/s): min=25296, max=25440, per=99.95%, avg=25358.00, stdev=73.43, samples=4 00:32:32.644 iops : min= 6324, max= 6360, avg=6339.50, stdev=18.36, samples=4 00:32:32.644 lat (msec) : 4=0.04%, 10=52.79%, 20=47.17% 00:32:32.644 cpu : usr=67.43%, sys=30.98%, ctx=48, majf=0, minf=1546 00:32:32.644 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:32.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:32:32.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:32.644 issued rwts: total=12751,12742,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:32.644 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:32.644 00:32:32.644 Run status group 0 (all jobs): 00:32:32.644 READ: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=49.8MiB (52.2MB), run=2009-2009msec 00:32:32.644 WRITE: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=49.8MiB (52.2MB), run=2009-2009msec 00:32:32.902 ----------------------------------------------------- 00:32:32.902 Suppressions used: 00:32:32.902 count bytes template 00:32:32.902 1 57 /usr/src/fio/parse.c 00:32:32.902 1 8 libtcmalloc_minimal.so 00:32:32.902 ----------------------------------------------------- 00:32:32.902 00:32:32.902 18:39:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:32.902 18:39:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:32.902 18:39:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:32.902 18:39:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:32.902 18:39:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:32.902 18:39:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:32.902 18:39:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:32.902 18:39:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:32.902 18:39:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:32.902 18:39:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:32.902 18:39:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:32.903 18:39:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:32.903 18:39:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:32.903 18:39:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:32.903 18:39:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:32.903 18:39:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:32.903 18:39:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:33.160 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:33.160 fio-3.35 00:32:33.160 Starting 1 thread 00:32:35.690 00:32:35.690 test: (groupid=0, jobs=1): err= 0: pid=3087206: Mon Nov 18 18:39:33 2024 00:32:35.690 read: IOPS=6129, BW=95.8MiB/s (100MB/s)(192MiB/2008msec) 00:32:35.690 slat (usec): min=3, max=104, avg= 5.11, stdev= 1.99 00:32:35.690 clat (usec): min=2704, max=22623, 
avg=12055.78, stdev=2825.04 00:32:35.690 lat (usec): min=2709, max=22627, avg=12060.89, stdev=2825.07 00:32:35.690 clat percentiles (usec): 00:32:35.690 | 1.00th=[ 6390], 5.00th=[ 7832], 10.00th=[ 8848], 20.00th=[ 9896], 00:32:35.690 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11863], 60.00th=[12256], 00:32:35.690 | 70.00th=[13042], 80.00th=[14222], 90.00th=[16057], 95.00th=[17433], 00:32:35.690 | 99.00th=[20055], 99.50th=[21365], 99.90th=[22152], 99.95th=[22414], 00:32:35.690 | 99.99th=[22676] 00:32:35.690 bw ( KiB/s): min=41824, max=54400, per=49.33%, avg=48376.00, stdev=6969.26, samples=4 00:32:35.690 iops : min= 2614, max= 3400, avg=3023.50, stdev=435.58, samples=4 00:32:35.690 write: IOPS=3552, BW=55.5MiB/s (58.2MB/s)(99.2MiB/1788msec); 0 zone resets 00:32:35.690 slat (usec): min=33, max=151, avg=36.53, stdev= 5.65 00:32:35.690 clat (usec): min=8031, max=25932, avg=15706.77, stdev=2774.54 00:32:35.690 lat (usec): min=8065, max=25973, avg=15743.30, stdev=2774.54 00:32:35.690 clat percentiles (usec): 00:32:35.690 | 1.00th=[10028], 5.00th=[11731], 10.00th=[12387], 20.00th=[13304], 00:32:35.690 | 30.00th=[14091], 40.00th=[14746], 50.00th=[15401], 60.00th=[16319], 00:32:35.690 | 70.00th=[17171], 80.00th=[17957], 90.00th=[19530], 95.00th=[20579], 00:32:35.690 | 99.00th=[22676], 99.50th=[23725], 99.90th=[25560], 99.95th=[25822], 00:32:35.690 | 99.99th=[25822] 00:32:35.690 bw ( KiB/s): min=43936, max=56320, per=88.36%, avg=50216.00, stdev=6628.11, samples=4 00:32:35.690 iops : min= 2746, max= 3520, avg=3138.50, stdev=414.26, samples=4 00:32:35.690 lat (msec) : 4=0.12%, 10=14.63%, 20=82.09%, 50=3.16% 00:32:35.690 cpu : usr=76.84%, sys=22.01%, ctx=44, majf=0, minf=2112 00:32:35.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:32:35.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:35.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:35.691 issued rwts: total=12308,6351,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:32:35.691 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:35.691 00:32:35.691 Run status group 0 (all jobs): 00:32:35.691 READ: bw=95.8MiB/s (100MB/s), 95.8MiB/s-95.8MiB/s (100MB/s-100MB/s), io=192MiB (202MB), run=2008-2008msec 00:32:35.691 WRITE: bw=55.5MiB/s (58.2MB/s), 55.5MiB/s-55.5MiB/s (58.2MB/s-58.2MB/s), io=99.2MiB (104MB), run=1788-1788msec 00:32:35.948 ----------------------------------------------------- 00:32:35.948 Suppressions used: 00:32:35.948 count bytes template 00:32:35.948 1 57 /usr/src/fio/parse.c 00:32:35.948 143 13728 /usr/src/fio/iolog.c 00:32:35.948 1 8 libtcmalloc_minimal.so 00:32:35.948 ----------------------------------------------------- 00:32:35.948 00:32:35.948 18:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:36.206 18:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:36.206 18:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:36.206 18:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:36.206 18:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:36.206 18:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:32:36.206 18:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:36.206 18:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:36.206 18:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:36.206 18:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 
00:32:36.206 18:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:32:36.206 18:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:32:39.486 Nvme0n1 00:32:39.486 18:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:42.766 18:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=2d7068a0-822f-4892-ae44-fe9e861cdf33 00:32:42.766 18:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 2d7068a0-822f-4892-ae44-fe9e861cdf33 00:32:42.766 18:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=2d7068a0-822f-4892-ae44-fe9e861cdf33 00:32:42.766 18:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:42.766 18:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:42.766 18:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:42.766 18:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:42.766 18:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:42.766 { 00:32:42.766 "uuid": "2d7068a0-822f-4892-ae44-fe9e861cdf33", 00:32:42.766 "name": "lvs_0", 00:32:42.766 "base_bdev": "Nvme0n1", 00:32:42.766 "total_data_clusters": 930, 00:32:42.766 "free_clusters": 930, 00:32:42.766 "block_size": 512, 00:32:42.766 "cluster_size": 1073741824 00:32:42.766 } 00:32:42.766 ]' 00:32:42.766 18:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | 
select(.uuid=="2d7068a0-822f-4892-ae44-fe9e861cdf33") .free_clusters' 00:32:42.766 18:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:32:42.766 18:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="2d7068a0-822f-4892-ae44-fe9e861cdf33") .cluster_size' 00:32:42.766 18:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:32:42.766 18:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:32:42.766 18:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:32:42.766 952320 00:32:42.766 18:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:43.023 af813418-e3ec-45e5-bb37-1babe4d1cc0a 00:32:43.023 18:39:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:43.282 18:39:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:43.540 18:39:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:43.798 18:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:43.798 18:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:43.798 18:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:43.798 18:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:43.798 18:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:43.798 18:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:43.798 18:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:43.798 18:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:43.798 18:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:43.798 18:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:43.798 18:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:43.798 18:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:43.798 18:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:43.798 18:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:43.798 18:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:43.798 18:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
00:32:43.798 18:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:44.056 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:44.056 fio-3.35 00:32:44.056 Starting 1 thread 00:32:46.585 00:32:46.585 test: (groupid=0, jobs=1): err= 0: pid=3088603: Mon Nov 18 18:39:44 2024 00:32:46.585 read: IOPS=4469, BW=17.5MiB/s (18.3MB/s)(35.1MiB/2009msec) 00:32:46.585 slat (usec): min=3, max=246, avg= 3.80, stdev= 3.75 00:32:46.585 clat (usec): min=1484, max=172784, avg=15501.67, stdev=13090.50 00:32:46.585 lat (usec): min=1490, max=172851, avg=15505.47, stdev=13091.17 00:32:46.585 clat percentiles (msec): 00:32:46.585 | 1.00th=[ 11], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 14], 00:32:46.585 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 15], 00:32:46.585 | 70.00th=[ 16], 80.00th=[ 16], 90.00th=[ 17], 95.00th=[ 17], 00:32:46.585 | 99.00th=[ 22], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 174], 00:32:46.585 | 99.99th=[ 174] 00:32:46.585 bw ( KiB/s): min=12784, max=19712, per=99.50%, avg=17788.00, stdev=3340.91, samples=4 00:32:46.585 iops : min= 3196, max= 4928, avg=4447.00, stdev=835.23, samples=4 00:32:46.585 write: IOPS=4459, BW=17.4MiB/s (18.3MB/s)(35.0MiB/2009msec); 0 zone resets 00:32:46.585 slat (usec): min=3, max=191, avg= 3.86, stdev= 2.47 00:32:46.585 clat (usec): min=438, max=169989, avg=12951.95, stdev=12329.82 00:32:46.585 lat (usec): min=443, max=170000, avg=12955.81, stdev=12330.52 00:32:46.585 clat percentiles (msec): 00:32:46.585 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:32:46.585 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:32:46.585 | 70.00th=[ 13], 80.00th=[ 13], 90.00th=[ 14], 95.00th=[ 14], 00:32:46.585 | 99.00th=[ 15], 99.50th=[ 159], 99.90th=[ 
169], 99.95th=[ 169], 00:32:46.585 | 99.99th=[ 171] 00:32:46.585 bw ( KiB/s): min=13352, max=19576, per=99.98%, avg=17834.00, stdev=2993.66, samples=4 00:32:46.585 iops : min= 3338, max= 4894, avg=4458.50, stdev=748.41, samples=4 00:32:46.585 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:32:46.585 lat (msec) : 2=0.02%, 4=0.07%, 10=1.82%, 20=97.16%, 50=0.19% 00:32:46.585 lat (msec) : 250=0.71% 00:32:46.585 cpu : usr=67.03%, sys=31.57%, ctx=77, majf=0, minf=1545 00:32:46.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:46.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:46.585 issued rwts: total=8979,8959,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.585 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:46.585 00:32:46.585 Run status group 0 (all jobs): 00:32:46.585 READ: bw=17.5MiB/s (18.3MB/s), 17.5MiB/s-17.5MiB/s (18.3MB/s-18.3MB/s), io=35.1MiB (36.8MB), run=2009-2009msec 00:32:46.585 WRITE: bw=17.4MiB/s (18.3MB/s), 17.4MiB/s-17.4MiB/s (18.3MB/s-18.3MB/s), io=35.0MiB (36.7MB), run=2009-2009msec 00:32:46.842 ----------------------------------------------------- 00:32:46.842 Suppressions used: 00:32:46.842 count bytes template 00:32:46.842 1 58 /usr/src/fio/parse.c 00:32:46.842 1 8 libtcmalloc_minimal.so 00:32:46.842 ----------------------------------------------------- 00:32:46.842 00:32:46.842 18:39:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:47.101 18:39:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:48.500 18:39:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # 
ls_nested_guid=748f5699-7456-46dd-bd44-9a6a2290b7b4 00:32:48.500 18:39:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 748f5699-7456-46dd-bd44-9a6a2290b7b4 00:32:48.500 18:39:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=748f5699-7456-46dd-bd44-9a6a2290b7b4 00:32:48.500 18:39:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:48.500 18:39:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:48.500 18:39:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:48.500 18:39:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:48.500 18:39:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:48.500 { 00:32:48.500 "uuid": "2d7068a0-822f-4892-ae44-fe9e861cdf33", 00:32:48.500 "name": "lvs_0", 00:32:48.500 "base_bdev": "Nvme0n1", 00:32:48.500 "total_data_clusters": 930, 00:32:48.500 "free_clusters": 0, 00:32:48.500 "block_size": 512, 00:32:48.500 "cluster_size": 1073741824 00:32:48.500 }, 00:32:48.500 { 00:32:48.500 "uuid": "748f5699-7456-46dd-bd44-9a6a2290b7b4", 00:32:48.500 "name": "lvs_n_0", 00:32:48.500 "base_bdev": "af813418-e3ec-45e5-bb37-1babe4d1cc0a", 00:32:48.500 "total_data_clusters": 237847, 00:32:48.500 "free_clusters": 237847, 00:32:48.500 "block_size": 512, 00:32:48.500 "cluster_size": 4194304 00:32:48.500 } 00:32:48.500 ]' 00:32:48.500 18:39:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="748f5699-7456-46dd-bd44-9a6a2290b7b4") .free_clusters' 00:32:48.783 18:39:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:32:48.783 18:39:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="748f5699-7456-46dd-bd44-9a6a2290b7b4") .cluster_size' 00:32:48.783 18:39:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:32:48.783 18:39:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:32:48.783 18:39:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:32:48.783 951388 00:32:48.783 18:39:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:49.716 c8a9d343-f97f-427e-a43a-5291f1d15a79 00:32:49.716 18:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:49.974 18:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:50.233 18:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:50.490 18:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:50.748 18:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:50.748 18:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:32:50.748 18:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:50.748 18:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:50.748 18:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:50.748 18:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:50.748 18:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:50.748 18:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:50.748 18:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:50.748 18:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:50.748 18:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:50.748 18:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:50.748 18:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:50.748 18:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:50.748 18:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:50.748 18:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
00:32:51.006 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:51.006 fio-3.35 00:32:51.006 Starting 1 thread 00:32:53.533 00:32:53.533 test: (groupid=0, jobs=1): err= 0: pid=3089457: Mon Nov 18 18:39:51 2024 00:32:53.533 read: IOPS=4369, BW=17.1MiB/s (17.9MB/s)(34.3MiB/2011msec) 00:32:53.533 slat (usec): min=2, max=200, avg= 3.77, stdev= 3.25 00:32:53.533 clat (usec): min=6379, max=25082, avg=15851.85, stdev=1555.76 00:32:53.533 lat (usec): min=6387, max=25086, avg=15855.63, stdev=1555.61 00:32:53.533 clat percentiles (usec): 00:32:53.533 | 1.00th=[12256], 5.00th=[13435], 10.00th=[13960], 20.00th=[14615], 00:32:53.533 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15795], 60.00th=[16188], 00:32:53.533 | 70.00th=[16581], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:32:53.533 | 99.00th=[19530], 99.50th=[19792], 99.90th=[23200], 99.95th=[24773], 00:32:53.533 | 99.99th=[25035] 00:32:53.533 bw ( KiB/s): min=16080, max=18136, per=99.76%, avg=17436.00, stdev=923.40, samples=4 00:32:53.533 iops : min= 4020, max= 4534, avg=4359.00, stdev=230.85, samples=4 00:32:53.533 write: IOPS=4366, BW=17.1MiB/s (17.9MB/s)(34.3MiB/2011msec); 0 zone resets 00:32:53.533 slat (usec): min=3, max=172, avg= 3.86, stdev= 2.37 00:32:53.533 clat (usec): min=3052, max=23170, avg=13154.66, stdev=1287.27 00:32:53.533 lat (usec): min=3063, max=23173, avg=13158.51, stdev=1287.21 00:32:53.533 clat percentiles (usec): 00:32:53.533 | 1.00th=[10159], 5.00th=[11338], 10.00th=[11731], 20.00th=[12125], 00:32:53.533 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:32:53.533 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14615], 95.00th=[15008], 00:32:53.533 | 99.00th=[16057], 99.50th=[16450], 99.90th=[22676], 99.95th=[22938], 00:32:53.533 | 99.99th=[23200] 00:32:53.533 bw ( KiB/s): min=17112, max=17608, per=99.89%, avg=17448.00, stdev=232.51, samples=4 00:32:53.533 iops : min= 4278, max= 4402, avg=4362.00, stdev=58.13, 
samples=4 00:32:53.533 lat (msec) : 4=0.02%, 10=0.42%, 20=99.27%, 50=0.30% 00:32:53.533 cpu : usr=69.45%, sys=29.25%, ctx=79, majf=0, minf=1543 00:32:53.533 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:53.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:53.533 issued rwts: total=8787,8782,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:53.533 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:53.533 00:32:53.533 Run status group 0 (all jobs): 00:32:53.533 READ: bw=17.1MiB/s (17.9MB/s), 17.1MiB/s-17.1MiB/s (17.9MB/s-17.9MB/s), io=34.3MiB (36.0MB), run=2011-2011msec 00:32:53.533 WRITE: bw=17.1MiB/s (17.9MB/s), 17.1MiB/s-17.1MiB/s (17.9MB/s-17.9MB/s), io=34.3MiB (36.0MB), run=2011-2011msec 00:32:53.533 ----------------------------------------------------- 00:32:53.533 Suppressions used: 00:32:53.533 count bytes template 00:32:53.533 1 58 /usr/src/fio/parse.c 00:32:53.533 1 8 libtcmalloc_minimal.so 00:32:53.533 ----------------------------------------------------- 00:32:53.533 00:32:53.533 18:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:53.791 18:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:54.048 18:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:58.230 18:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:58.487 18:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:01.767 18:39:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:01.767 18:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:03.664 18:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:03.664 18:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:03.665 18:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:03.665 18:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:03.665 18:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:33:03.665 18:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:03.665 18:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:33:03.665 18:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:03.665 18:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:03.665 rmmod nvme_tcp 00:33:03.665 rmmod nvme_fabrics 00:33:03.665 rmmod nvme_keyring 00:33:03.923 18:40:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:03.923 18:40:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:33:03.923 18:40:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:33:03.923 18:40:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3086337 ']' 00:33:03.923 18:40:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3086337 00:33:03.923 18:40:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3086337 ']' 00:33:03.923 18:40:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 3086337 00:33:03.923 18:40:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:33:03.923 18:40:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:03.923 18:40:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3086337 00:33:03.923 18:40:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:03.923 18:40:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:03.923 18:40:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3086337' 00:33:03.923 killing process with pid 3086337 00:33:03.923 18:40:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3086337 00:33:03.923 18:40:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3086337 00:33:05.298 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:05.298 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:05.298 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:05.298 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:33:05.298 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:33:05.298 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:05.298 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:33:05.298 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:05.298 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:05.298 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.298 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.298 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:07.199 00:33:07.199 real 0m42.186s 00:33:07.199 user 2m42.068s 00:33:07.199 sys 0m8.248s 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.199 ************************************ 00:33:07.199 END TEST nvmf_fio_host 00:33:07.199 ************************************ 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.199 ************************************ 00:33:07.199 START TEST nvmf_failover 00:33:07.199 ************************************ 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:07.199 * Looking for test storage... 
00:33:07.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:07.199 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:07.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.458 --rc genhtml_branch_coverage=1 00:33:07.458 --rc genhtml_function_coverage=1 00:33:07.458 --rc genhtml_legend=1 00:33:07.458 --rc geninfo_all_blocks=1 00:33:07.458 --rc geninfo_unexecuted_blocks=1 00:33:07.458 00:33:07.458 ' 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:33:07.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.458 --rc genhtml_branch_coverage=1 00:33:07.458 --rc genhtml_function_coverage=1 00:33:07.458 --rc genhtml_legend=1 00:33:07.458 --rc geninfo_all_blocks=1 00:33:07.458 --rc geninfo_unexecuted_blocks=1 00:33:07.458 00:33:07.458 ' 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:07.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.458 --rc genhtml_branch_coverage=1 00:33:07.458 --rc genhtml_function_coverage=1 00:33:07.458 --rc genhtml_legend=1 00:33:07.458 --rc geninfo_all_blocks=1 00:33:07.458 --rc geninfo_unexecuted_blocks=1 00:33:07.458 00:33:07.458 ' 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:07.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.458 --rc genhtml_branch_coverage=1 00:33:07.458 --rc genhtml_function_coverage=1 00:33:07.458 --rc genhtml_legend=1 00:33:07.458 --rc geninfo_all_blocks=1 00:33:07.458 --rc geninfo_unexecuted_blocks=1 00:33:07.458 00:33:07.458 ' 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.458 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:07.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:33:07.459 18:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:09.359 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:09.359 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:09.360 18:40:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:09.360 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:09.360 18:40:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:09.360 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:09.360 18:40:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:09.360 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:09.360 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:09.360 18:40:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:09.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:09.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:33:09.360 00:33:09.360 --- 10.0.0.2 ping statistics --- 00:33:09.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.360 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:33:09.360 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:09.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:09.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:33:09.619 00:33:09.619 --- 10.0.0.1 ping statistics --- 00:33:09.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.619 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3092966 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 3092966 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3092966 ']' 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:09.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:09.619 18:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:09.619 [2024-11-18 18:40:07.824183] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:33:09.619 [2024-11-18 18:40:07.824342] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:09.877 [2024-11-18 18:40:07.971735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:09.877 [2024-11-18 18:40:08.108121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:09.877 [2024-11-18 18:40:08.108196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:09.877 [2024-11-18 18:40:08.108222] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:09.877 [2024-11-18 18:40:08.108246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:33:09.877 [2024-11-18 18:40:08.108266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:09.877 [2024-11-18 18:40:08.111338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:09.877 [2024-11-18 18:40:08.111393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:09.877 [2024-11-18 18:40:08.111396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:10.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:10.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:10.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:10.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:10.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:10.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:10.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:11.068 [2024-11-18 18:40:09.154842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:11.068 18:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:11.325 Malloc0 00:33:11.325 18:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:11.583 18:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:12.148 18:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:12.148 [2024-11-18 18:40:10.457105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:12.148 18:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:12.407 [2024-11-18 18:40:10.733880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:12.664 18:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:12.923 [2024-11-18 18:40:11.010871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:12.923 18:40:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3093392 00:33:12.923 18:40:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:12.923 18:40:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:12.923 18:40:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3093392 /var/tmp/bdevperf.sock 00:33:12.923 18:40:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 3093392 ']' 00:33:12.923 18:40:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:12.923 18:40:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.923 18:40:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:12.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:12.923 18:40:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.923 18:40:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:13.857 18:40:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:13.857 18:40:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:13.857 18:40:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:14.423 NVMe0n1 00:33:14.423 18:40:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:14.681 00:33:14.681 18:40:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3093654 00:33:14.681 18:40:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:14.681 18:40:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:33:15.615 18:40:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:15.873 [2024-11-18 18:40:14.178730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:15.873 [... repeated tcp.c:1773 recv-state messages for tqpair=0x618000003080 omitted ...] 00:33:15.873 18:40:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:19.153 18:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:19.411 00:33:19.411 18:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:19.669 18:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:22.950 18:40:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:22.950 [2024-11-18 18:40:21.285236] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:23.208 18:40:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:24.142 18:40:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:24.400 18:40:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3093654
00:33:30.981 { 00:33:30.981 "results": [ 00:33:30.981 { 00:33:30.981 "job": "NVMe0n1", 00:33:30.981 "core_mask": "0x1", 00:33:30.981 "workload": "verify", 00:33:30.981 "status": "finished", 00:33:30.981 "verify_range": { 00:33:30.981 "start": 0, 00:33:30.981 "length": 16384 00:33:30.981 }, 00:33:30.981 "queue_depth": 128, 00:33:30.981 "io_size": 4096, 00:33:30.981 "runtime": 15.056521, 00:33:30.981 "iops": 6137.34075753622, 00:33:30.981 "mibps": 23.97398733412586, 00:33:30.981 "io_failed": 9373, 00:33:30.981 "io_timeout": 0, 00:33:30.981 "avg_latency_us": 18855.527407582078, 00:33:30.981 "min_latency_us": 1110.4711111111112, 00:33:30.981 "max_latency_us": 50486.99259259259 00:33:30.981 } 00:33:30.981 ], 00:33:30.981 "core_count": 1 00:33:30.981 } 00:33:30.981 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3093392 00:33:30.981 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3093392 ']' 00:33:30.981 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3093392 00:33:30.981 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:30.981 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:30.981 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3093392 00:33:30.982 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:30.982 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:30.982 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3093392' 00:33:30.982 killing process with pid 3093392 00:33:30.982 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3093392 00:33:30.982 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@978 -- # wait 3093392 00:33:30.982 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:30.982 [2024-11-18 18:40:11.115083] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:33:30.982 [2024-11-18 18:40:11.115239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093392 ] 00:33:30.982 [2024-11-18 18:40:11.252006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.982 [2024-11-18 18:40:11.377628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:30.982 Running I/O for 15 seconds... 00:33:30.982 6204.00 IOPS, 24.23 MiB/s [2024-11-18T17:40:29.319Z] [2024-11-18 18:40:14.180435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.982 [2024-11-18 18:40:14.180521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.982 [2024-11-18 18:40:14.180553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.982 [2024-11-18 18:40:14.180575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.982 [2024-11-18 18:40:14.180597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.982 [2024-11-18 18:40:14.180628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.982 [2024-11-18 
18:40:14.180652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.982 [2024-11-18 18:40:14.180673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.982 [2024-11-18 18:40:14.180694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2000 is same with the state(6) to be set 00:33:30.982 [2024-11-18 18:40:14.180814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.982 [2024-11-18 18:40:14.180847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.982 [... repeated nvme_qpair.c WRITE/READ command and "ABORTED - SQ DELETION" completion message pairs for lba 57608-58528 omitted ...] 00:33:30.984 [2024-11-18 18:40:14.184827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.984 [2024-11-18 18:40:14.184849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.184873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.984 [2024-11-18 18:40:14.184895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.184919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.984 [2024-11-18 18:40:14.184941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.184965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.984 [2024-11-18 18:40:14.184987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.185011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.984 [2024-11-18 18:40:14.185033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.185058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.984 [2024-11-18 18:40:14.185080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.185104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.984 [2024-11-18 18:40:14.185127] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.185152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.984 [2024-11-18 18:40:14.185173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.185198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.984 [2024-11-18 18:40:14.185220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.185243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.984 [2024-11-18 18:40:14.185265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.185290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.984 [2024-11-18 18:40:14.185312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.185337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:57664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.984 [2024-11-18 18:40:14.185359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.185388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.984 [2024-11-18 18:40:14.185411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.185436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.984 [2024-11-18 18:40:14.185459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.185483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:57688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.984 [2024-11-18 18:40:14.185506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.185530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:57696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.984 [2024-11-18 18:40:14.185553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.185577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.984 [2024-11-18 18:40:14.185600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.185633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.984 [2024-11-18 18:40:14.185656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 
[2024-11-18 18:40:14.185680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:57720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.984 [2024-11-18 18:40:14.185702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.185727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.984 [2024-11-18 18:40:14.185749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.185773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:58624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.984 [2024-11-18 18:40:14.185795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.185820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:57728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.984 [2024-11-18 18:40:14.185842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.185866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:57736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.984 [2024-11-18 18:40:14.185888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.185912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:57744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.984 [2024-11-18 18:40:14.185934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.185958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.984 [2024-11-18 18:40:14.185985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.984 [2024-11-18 18:40:14.186010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:57760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.984 [2024-11-18 18:40:14.186032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:14.186058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:57768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:14.186080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:14.186105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:14.186127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:14.186152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:57784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:14.186174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:14.186199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:57792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:14.186221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:14.186246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:57800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:14.186268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:14.186292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:57808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:14.186314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:14.186338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:57816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:14.186360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:14.186384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:57824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:14.186406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:14.186430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:57832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:14.186452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 
[2024-11-18 18:40:14.186477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:14.186498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:14.186522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:14.186545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:14.186574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:14.186596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:14.186628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:14.186651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:14.186676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:14.186698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:14.186722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:57880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:14.186744] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:14.186768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:14.186790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:14.186823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:14.186847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:14.186871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:57904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:14.186893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:14.186939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:30.985 [2024-11-18 18:40:14.186963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:30.985 [2024-11-18 18:40:14.186983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57912 len:8 PRP1 0x0 PRP2 0x0 00:33:30.985 [2024-11-18 18:40:14.187012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:14.187308] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:30.985 [2024-11-18 18:40:14.187342] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:33:30.985 [2024-11-18 18:40:14.191226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:30.985 [2024-11-18 18:40:14.191303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:30.985 [2024-11-18 18:40:14.269069] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:33:30.985 5870.50 IOPS, 22.93 MiB/s [2024-11-18T17:40:29.322Z] 5986.00 IOPS, 23.38 MiB/s [2024-11-18T17:40:29.322Z] 6016.75 IOPS, 23.50 MiB/s [2024-11-18T17:40:29.322Z] [2024-11-18 18:40:17.948775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.985 [2024-11-18 18:40:17.948860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:17.948913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.985 [2024-11-18 18:40:17.948947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:17.948971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.985 [2024-11-18 18:40:17.948992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:17.949014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.985 [2024-11-18 18:40:17.949034] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:17.949055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2000 is same with the state(6) to be set 00:33:30.985 [2024-11-18 18:40:17.949157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.985 [2024-11-18 18:40:17.949186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:17.949224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:17.949248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:17.949274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:17.949296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:17.949321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:17.949358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:17.949383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:17.949405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:17.949428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:17.949449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:17.949472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:17.949493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:17.949517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:17.949538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:17.949562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:17.949582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:17.949632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:17.949660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:17.949686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:30.985 [2024-11-18 18:40:17.949708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:17.949732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.985 [2024-11-18 18:40:17.949753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.985 [2024-11-18 18:40:17.949777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.986 [2024-11-18 18:40:17.949799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.986 [2024-11-18 18:40:17.949823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.986 [2024-11-18 18:40:17.949844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.986 [2024-11-18 18:40:17.949868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:128160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.986 [2024-11-18 18:40:17.949889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.986 [2024-11-18 18:40:17.949933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.986 [2024-11-18 18:40:17.949955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.986 [2024-11-18 18:40:17.949978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.986 [2024-11-18 18:40:17.949999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.986 [2024-11-18 18:40:17.950022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.986 [2024-11-18 18:40:17.950043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.986 [2024-11-18 18:40:17.950066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.986 [2024-11-18 18:40:17.950087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.986 [2024-11-18 18:40:17.950110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.986 [2024-11-18 18:40:17.950131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.986 [2024-11-18 18:40:17.950153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.986 [2024-11-18 18:40:17.950175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.986 [2024-11-18 18:40:17.950198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.986 [2024-11-18 18:40:17.950219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:30.986 [2024-11-18 18:40:17.950250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:30.986 [2024-11-18 18:40:17.950272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~100 further command + completion pairs elided: READ lba 128232-128608 and WRITE lba 128632-129064, len:8 each, every completion ABORTED - SQ DELETION (00/08) qid:1 ...]
00:33:30.989 [2024-11-18 18:40:17.955264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:30.989 [2024-11-18 18:40:17.955287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:30.989 [2024-11-18 18:40:17.955306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128616 len:8 PRP1 0x0 PRP2 0x0
00:33:30.989 [2024-11-18 18:40:17.955326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:30.989 [2024-11-18 18:40:17.955603] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:33:30.989 [2024-11-18 18:40:17.955645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in 
failed state. 00:33:30.989 [2024-11-18 18:40:17.959499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:33:30.989 [2024-11-18 18:40:17.959574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:30.989 [2024-11-18 18:40:17.992907] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:33:30.989 5978.00 IOPS, 23.35 MiB/s [2024-11-18T17:40:29.326Z] 6054.67 IOPS, 23.65 MiB/s [2024-11-18T17:40:29.326Z] 6109.71 IOPS, 23.87 MiB/s [2024-11-18T17:40:29.326Z] 6133.62 IOPS, 23.96 MiB/s [2024-11-18T17:40:29.326Z] 6165.67 IOPS, 24.08 MiB/s [2024-11-18T17:40:29.326Z] [2024-11-18 18:40:22.556633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:109408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.556741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.556790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:109416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.556816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.556842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:109424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.556865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.556891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:109432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:30.989 [2024-11-18 18:40:22.556914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.556968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:109440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.556991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.557015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:109448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.557036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.557060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.557081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.557105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:109464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.557126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.557150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.557171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.557195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.557216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.557239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.557260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.557284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:109496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.557305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.557328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.557366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.557391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:109512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.557413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.557438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.557460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.557485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:109528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.557507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.557531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:109536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.557567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.557594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.557624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.557677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.557702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.557727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.557749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.557773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:109568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:30.989 [2024-11-18 18:40:22.557795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.557819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:109576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.557841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.557864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:109584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.557886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.557910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.989 [2024-11-18 18:40:22.557932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.557956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.989 [2024-11-18 18:40:22.557978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.558003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:108664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.989 [2024-11-18 18:40:22.558024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.558049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:108672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.989 [2024-11-18 18:40:22.558071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.558095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:108680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.989 [2024-11-18 18:40:22.558117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.558142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:108688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.989 [2024-11-18 18:40:22.558163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.558191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:108696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.989 [2024-11-18 18:40:22.558214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.558239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:108704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.989 [2024-11-18 18:40:22.558261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.989 [2024-11-18 18:40:22.558285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:108712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.989 [2024-11-18 18:40:22.558307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.558330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.558353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.558377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.558398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.558423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:108736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.558445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.558469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:108744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.558492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.558516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.558538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.558562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:108760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:30.990 [2024-11-18 18:40:22.558584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.558615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.558639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.558663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.558685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.558709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:108784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.558732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.558756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:108792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.558778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.558806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:108800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.558829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.558854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.558876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.558900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.558922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.558946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:108824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.558968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.558993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.559015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.559039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.990 [2024-11-18 18:40:22.559061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.559085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.990 [2024-11-18 18:40:22.559107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.559132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.990 [2024-11-18 18:40:22.559154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.559177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.990 [2024-11-18 18:40:22.559199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.559224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.990 [2024-11-18 18:40:22.559247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.559272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:109640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.990 [2024-11-18 18:40:22.559294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.559318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:109648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.990 [2024-11-18 18:40:22.559341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.559365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:109656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:30.990 [2024-11-18 18:40:22.559391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.559417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.990 [2024-11-18 18:40:22.559439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.559464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:108840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.559486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.559534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.559557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.559582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:108856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.559604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.559638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.559660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.559685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:108872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.559707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.559731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:108880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.559753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.559778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.559800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.559824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:108896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.559847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.559872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:108904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.559895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.559919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:108912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.559940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.990 [2024-11-18 18:40:22.559964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.990 [2024-11-18 18:40:22.559995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:108928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.560048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.560095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:108944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.560141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:108952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.560187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:109672 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:30.991 [2024-11-18 18:40:22.560234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.560280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.560326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:108976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.560373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.560419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.560465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.560511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.560557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.560615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.560666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.560712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.560757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.560804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.560850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:109064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.560897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.560944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.560968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.560990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.561014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:30.991 [2024-11-18 18:40:22.561036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.561062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.561084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.561108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.561130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.561155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:109112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.561177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.561205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:109120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.561228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.561252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.561275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.561299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:109136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.561321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.561345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.561367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.561392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.561414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.561438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.561460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.561485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.561507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.561531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.561554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.561579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.561601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.561635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.561658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.561682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:109200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.561704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.561729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.561751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.561775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.991 [2024-11-18 18:40:22.561801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.991 [2024-11-18 18:40:22.561826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:109224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:30.991 [2024-11-18 18:40:22.561848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.561872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.992 [2024-11-18 18:40:22.561895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.561919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.992 [2024-11-18 18:40:22.561941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.561965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.992 [2024-11-18 18:40:22.561987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.562012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:109256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.992 [2024-11-18 18:40:22.562034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.562058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.992 [2024-11-18 18:40:22.562080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.562104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:109272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.992 [2024-11-18 18:40:22.562126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.562150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.992 [2024-11-18 18:40:22.562172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.562196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.992 [2024-11-18 18:40:22.562218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.562243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.992 [2024-11-18 18:40:22.562265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.562290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.992 [2024-11-18 18:40:22.562312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.562337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.992 [2024-11-18 18:40:22.562359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.562388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.992 [2024-11-18 18:40:22.562411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.562435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.992 [2024-11-18 18:40:22.562457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.562481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.992 [2024-11-18 18:40:22.562504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.562528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.992 [2024-11-18 18:40:22.562550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.562589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.992 [2024-11-18 18:40:22.562618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.562645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:30.992 [2024-11-18 18:40:22.562668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.562692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.992 [2024-11-18 18:40:22.562714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.562738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.992 [2024-11-18 18:40:22.562761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.562785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.992 [2024-11-18 18:40:22.562807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.562831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.992 [2024-11-18 18:40:22.562853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.992 [2024-11-18 18:40:22.562876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:33:30.992 [2024-11-18 18:40:22.562906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:30.992 [2024-11-18 18:40:22.562925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually:
00:33:30.992 [2024-11-18 18:40:22.562944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109400 len:8 PRP1 0x0 PRP2 0x0
00:33:30.992 [2024-11-18 18:40:22.562965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:30.992 [2024-11-18 18:40:22.563257] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:33:30.992 [2024-11-18 18:40:22.563325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:30.992 [2024-11-18 18:40:22.563353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:30.992 [2024-11-18 18:40:22.563378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:30.992 [2024-11-18 18:40:22.563399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:30.992 [2024-11-18 18:40:22.563420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:30.992 [2024-11-18 18:40:22.563440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:30.992 [2024-11-18 18:40:22.563461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:30.992 [2024-11-18 18:40:22.563482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:30.992 [2024-11-18 18:40:22.563503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:33:30.992 [2024-11-18 18:40:22.567375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:33:30.992 [2024-11-18 18:40:22.567453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor
00:33:30.992 [2024-11-18 18:40:22.722868] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:33:30.992 6070.50 IOPS, 23.71 MiB/s [2024-11-18T17:40:29.329Z] 6093.09 IOPS, 23.80 MiB/s [2024-11-18T17:40:29.329Z] 6119.50 IOPS, 23.90 MiB/s [2024-11-18T17:40:29.329Z] 6134.31 IOPS, 23.96 MiB/s [2024-11-18T17:40:29.329Z] 6147.43 IOPS, 24.01 MiB/s [2024-11-18T17:40:29.329Z] 6151.93 IOPS, 24.03 MiB/s
00:33:30.992 Latency(us)
00:33:30.992 [2024-11-18T17:40:29.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:30.992 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:30.992 Verification LBA range: start 0x0 length 0x4000
00:33:30.992 NVMe0n1 : 15.06 6137.34 23.97 622.52 0.00 18855.53 1110.47 50486.99
00:33:30.992 [2024-11-18T17:40:29.329Z] ===================================================================================================================
00:33:30.992 [2024-11-18T17:40:29.329Z] Total : 6137.34 23.97 622.52 0.00 18855.53 1110.47 50486.99
00:33:30.992 Received shutdown signal, test time was about 15.000000 seconds
00:33:30.992
00:33:30.992 Latency(us)
00:33:30.992 [2024-11-18T17:40:29.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:30.992 [2024-11-18T17:40:29.329Z] ===================================================================================================================
00:33:30.992 [2024-11-18T17:40:29.329Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:30.992 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # 
grep -c 'Resetting controller successful' 00:33:30.992 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:30.992 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:30.992 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3095494 00:33:30.992 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:30.992 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3095494 /var/tmp/bdevperf.sock 00:33:30.992 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3095494 ']' 00:33:30.992 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:30.992 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:30.992 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:30.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:33:30.993 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:30.993 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:31.926 18:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:31.926 18:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:31.926 18:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:32.184 [2024-11-18 18:40:30.319906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:32.184 18:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:32.442 [2024-11-18 18:40:30.645111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:32.443 18:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:33.009 NVMe0n1 00:33:33.009 18:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:33.267 00:33:33.525 18:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:33:33.783
00:33:33.783 18:40:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:33:33.783 18:40:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:33:34.041 18:40:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:34.299 18:40:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:33:37.577 18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:33:37.578 18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:33:37.578 18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3096292
00:33:37.578 18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:33:37.578 18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3096292
00:33:38.952 {
00:33:38.952 "results": [
00:33:38.952 {
00:33:38.952 "job": "NVMe0n1",
00:33:38.952 "core_mask": "0x1",
00:33:38.952 "workload": "verify",
00:33:38.952 "status": "finished",
00:33:38.952 "verify_range": {
00:33:38.952 "start": 0,
00:33:38.952 "length": 16384
00:33:38.952 },
00:33:38.952 "queue_depth": 128,
00:33:38.952 "io_size": 4096,
00:33:38.952 "runtime": 1.015062,
00:33:38.952 "iops": 6191.740011940157,
00:33:38.952 "mibps": 24.18648442164124,
00:33:38.952 "io_failed": 0,
00:33:38.952 "io_timeout": 0,
00:33:38.952 "avg_latency_us": 
20532.894748814047, 00:33:38.952 "min_latency_us": 4296.248888888889, 00:33:38.952 "max_latency_us": 20097.706666666665 00:33:38.952 } 00:33:38.952 ], 00:33:38.952 "core_count": 1 00:33:38.952 } 00:33:38.952 18:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:38.952 [2024-11-18 18:40:29.076966] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:33:38.952 [2024-11-18 18:40:29.077099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3095494 ] 00:33:38.952 [2024-11-18 18:40:29.213174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.952 [2024-11-18 18:40:29.338523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.952 [2024-11-18 18:40:32.561293] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:38.952 [2024-11-18 18:40:32.561453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.952 [2024-11-18 18:40:32.561495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.952 [2024-11-18 18:40:32.561528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.952 [2024-11-18 18:40:32.561550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.952 [2024-11-18 18:40:32.561572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:33:38.952 [2024-11-18 18:40:32.561594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.952 [2024-11-18 18:40:32.561624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.952 [2024-11-18 18:40:32.561647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.952 [2024-11-18 18:40:32.561668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:33:38.952 [2024-11-18 18:40:32.561756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:33:38.952 [2024-11-18 18:40:32.561815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:38.952 [2024-11-18 18:40:32.614605] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:33:38.952 Running I/O for 1 seconds... 
00:33:38.952 6157.00 IOPS, 24.05 MiB/s
00:33:38.952 Latency(us)
00:33:38.952 [2024-11-18T17:40:37.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:38.952 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:38.952 Verification LBA range: start 0x0 length 0x4000
00:33:38.952 NVMe0n1 : 1.02 6191.74 24.19 0.00 0.00 20532.89 4296.25 20097.71
00:33:38.952 [2024-11-18T17:40:37.289Z] ===================================================================================================================
00:33:38.952 [2024-11-18T17:40:37.289Z] Total : 6191.74 24.19 0.00 0.00 20532.89 4296.25 20097.71
00:33:38.952 18:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:33:38.952 18:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:33:38.952 18:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:39.210 18:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:33:39.210 18:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:33:39.776 18:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:40.033 18:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:33:43.311 18:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:43.311 18:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:43.311 18:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3095494 00:33:43.311 18:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3095494 ']' 00:33:43.311 18:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3095494 00:33:43.311 18:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:43.311 18:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:43.311 18:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3095494 00:33:43.311 18:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:43.311 18:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:43.311 18:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3095494' 00:33:43.311 killing process with pid 3095494 00:33:43.311 18:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3095494 00:33:43.311 18:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3095494 00:33:44.248 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:33:44.248 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:44.248 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:44.248 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:44.248 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:44.248 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:44.248 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:33:44.248 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:44.248 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:33:44.248 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:44.248 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:44.248 rmmod nvme_tcp 00:33:44.248 rmmod nvme_fabrics 00:33:44.507 rmmod nvme_keyring 00:33:44.507 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:44.507 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:33:44.507 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:33:44.507 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3092966 ']' 00:33:44.507 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3092966 00:33:44.507 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3092966 ']' 00:33:44.507 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3092966 00:33:44.507 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:44.507 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:44.507 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3092966 00:33:44.507 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:33:44.507 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:44.507 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3092966' 00:33:44.507 killing process with pid 3092966 00:33:44.507 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3092966 00:33:44.507 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3092966 00:33:45.882 18:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:45.882 18:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:45.882 18:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:45.882 18:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:33:45.882 18:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:33:45.882 18:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:45.882 18:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:33:45.882 18:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:45.882 18:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:45.882 18:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.882 18:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:45.882 18:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.784 18:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:47.784 00:33:47.784 real 0m40.561s 00:33:47.784 user 2m23.073s 00:33:47.784 sys 
0m6.310s 00:33:47.784 18:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:47.784 18:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:47.784 ************************************ 00:33:47.784 END TEST nvmf_failover 00:33:47.784 ************************************ 00:33:47.784 18:40:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:47.784 18:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:47.784 18:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:47.784 18:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.784 ************************************ 00:33:47.784 START TEST nvmf_host_discovery 00:33:47.784 ************************************ 00:33:47.784 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:47.784 * Looking for test storage... 
00:33:47.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:47.784 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:47.784 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:33:47.784 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:48.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.044 --rc genhtml_branch_coverage=1 00:33:48.044 --rc genhtml_function_coverage=1 00:33:48.044 --rc 
genhtml_legend=1 00:33:48.044 --rc geninfo_all_blocks=1 00:33:48.044 --rc geninfo_unexecuted_blocks=1 00:33:48.044 00:33:48.044 ' 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:48.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.044 --rc genhtml_branch_coverage=1 00:33:48.044 --rc genhtml_function_coverage=1 00:33:48.044 --rc genhtml_legend=1 00:33:48.044 --rc geninfo_all_blocks=1 00:33:48.044 --rc geninfo_unexecuted_blocks=1 00:33:48.044 00:33:48.044 ' 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:48.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.044 --rc genhtml_branch_coverage=1 00:33:48.044 --rc genhtml_function_coverage=1 00:33:48.044 --rc genhtml_legend=1 00:33:48.044 --rc geninfo_all_blocks=1 00:33:48.044 --rc geninfo_unexecuted_blocks=1 00:33:48.044 00:33:48.044 ' 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:48.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.044 --rc genhtml_branch_coverage=1 00:33:48.044 --rc genhtml_function_coverage=1 00:33:48.044 --rc genhtml_legend=1 00:33:48.044 --rc geninfo_all_blocks=1 00:33:48.044 --rc geninfo_unexecuted_blocks=1 00:33:48.044 00:33:48.044 ' 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:48.044 18:40:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:48.044 18:40:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:48.044 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:48.044 18:40:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:48.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:33:48.045 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.946 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:49.946 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:33:49.946 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:49.946 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:49.946 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:49.946 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:49.946 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:49.946 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:33:49.946 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:49.946 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:33:49.946 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:33:49.946 
18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:33:49.946 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:33:49.946 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:33:49.946 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:33:49.946 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:49.946 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:49.946 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:49.946 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:49.947 18:40:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:49.947 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:49.947 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:49.947 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:49.947 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:49.947 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:50.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:50.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:33:50.206 00:33:50.206 --- 10.0.0.2 ping statistics --- 00:33:50.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.206 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:50.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:50.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:33:50.206 00:33:50.206 --- 10.0.0.1 ping statistics --- 00:33:50.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.206 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:50.206 
18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3099161 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3099161 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3099161 ']' 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:50.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:50.206 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:50.206 [2024-11-18 18:40:48.453667] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:33:50.206 [2024-11-18 18:40:48.453842] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:50.465 [2024-11-18 18:40:48.606685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:50.465 [2024-11-18 18:40:48.743684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:50.465 [2024-11-18 18:40:48.743774] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:50.465 [2024-11-18 18:40:48.743800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:50.465 [2024-11-18 18:40:48.743826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:50.465 [2024-11-18 18:40:48.743846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:50.465 [2024-11-18 18:40:48.745509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:51.402 [2024-11-18 18:40:49.426600] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:51.402 [2024-11-18 18:40:49.434824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:51.402 null0
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:51.402 null1
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3099312
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3099312 /tmp/host.sock
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3099312 ']'
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:33:51.402 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:51.402 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:51.402 [2024-11-18 18:40:49.562856] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:33:51.403 [2024-11-18 18:40:49.563045] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3099312 ]
00:33:51.403 [2024-11-18 18:40:49.713098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:51.661 [2024-11-18 18:40:49.835832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:52.227 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:52.227 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:33:52.227 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:33:52.227 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:33:52.227 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:52.227 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:52.227 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:52.227 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:33:52.227 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:52.227 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:33:52.485 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:33:52.486 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:52.486 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:33:52.486 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:33:52.486 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:33:52.486 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:33:52.486 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:52.486 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:52.486 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:33:52.486 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:33:52.486 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:52.744 [2024-11-18 18:40:50.830817] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:33:52.744 18:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:52.744 18:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:33:52.744 18:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:33:53.310 [2024-11-18 18:40:51.618560] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:33:53.310 [2024-11-18 18:40:51.618617] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:33:53.310 [2024-11-18 18:40:51.618680] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:33:53.567 [2024-11-18 18:40:51.745150] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:33:53.567 [2024-11-18 18:40:51.886527] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:33:53.567 [2024-11-18 18:40:51.888264] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2a00:1 started.
00:33:53.567 [2024-11-18 18:40:51.890831] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:33:53.567 [2024-11-18 18:40:51.890865] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:33:53.567 [2024-11-18 18:40:51.897205] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2a00 was disconnected and freed. delete nvme_qpair.
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:53.825 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:33:54.083 [2024-11-18 18:40:52.201198] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2c80:1 started.
00:33:54.083 [2024-11-18 18:40:52.207688] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2c80 was disconnected and freed. delete nvme_qpair.
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:54.083 [2024-11-18 18:40:52.292858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:33:54.083 [2024-11-18 18:40:52.293264] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:33:54.083 [2024-11-18 18:40:52.293318] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:54.083 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:54.084 18:40:52
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:54.084 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.341 [2024-11-18 18:40:52.420346] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:54.341 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:54.341 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:54.597 [2024-11-18 18:40:52.724510] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:33:54.597 [2024-11-18 18:40:52.724680] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:54.597 [2024-11-18 18:40:52.724725] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
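The xtrace above repeatedly exercises a `waitforcondition` helper whose body is visible in the trace itself (`autotest_common.sh` @918 `local cond`, @919 `local max=10`, @920 `(( max-- ))`, @921 `eval`, @924 `sleep 1`, @922 `return 0`). A minimal reconstruction of that polling loop, with names taken from the trace; the `return 1` on exhaustion is an assumption, since this run only shows the success path:

```shell
#!/usr/bin/env bash
# Reconstructed sketch of waitforcondition as seen in the xtrace:
# poll an arbitrary bash condition up to $max times, one second apart.
waitforcondition() {
    local cond=$1   # condition string, eval'd each iteration (@918/@921)
    local max=10    # retry budget (@919)
    while ((max--)); do             # @920
        eval "$cond" && return 0    # @921/@922
        sleep 1                     # @924
    done
    return 1                        # assumed: trace never shows exhaustion
}

# Usage mirroring the log: block until a condition becomes true.
n=0
waitforcondition '(( ++n >= 2 ))'
```

Passing the condition as a string and `eval`-ing it is what lets the callers in `host/discovery.sh` hand over compound expressions like `get_notification_count && ((notification_count == expected_count))` unevaluated.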
00:33:54.597 [2024-11-18 18:40:52.724751] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.163 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.422 [2024-11-18 18:40:53.523111] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:55.422 [2024-11-18 18:40:53.523178] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:55.422 [2024-11-18 18:40:53.531556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:55.422 [2024-11-18 18:40:53.531614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.422 [2024-11-18 18:40:53.531665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:55.422 [2024-11-18 18:40:53.531688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.422 [2024-11-18 18:40:53.531710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:55.422 [2024-11-18 18:40:53.531730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.422 [2024-11-18 18:40:53.531751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:55.422 [2024-11-18 18:40:53.531772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.422 [2024-11-18 18:40:53.531793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:55.422 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.422 [2024-11-18 18:40:53.541540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:55.422 [2024-11-18 18:40:53.551589] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:55.422 [2024-11-18 18:40:53.551642] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:55.422 [2024-11-18 18:40:53.551679] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:55.422 [2024-11-18 18:40:53.551696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:55.422 [2024-11-18 18:40:53.551782] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:55.422 [2024-11-18 18:40:53.552022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.422 [2024-11-18 18:40:53.552065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:55.422 [2024-11-18 18:40:53.552102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:55.422 [2024-11-18 18:40:53.552143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:55.422 [2024-11-18 18:40:53.552181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:55.422 [2024-11-18 18:40:53.552211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:55.422 [2024-11-18 18:40:53.552246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:55.422 [2024-11-18 18:40:53.552270] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:55.422 [2024-11-18 18:40:53.552289] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:55.422 [2024-11-18 18:40:53.552305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:55.422 [2024-11-18 18:40:53.561816] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:55.422 [2024-11-18 18:40:53.561847] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:55.422 [2024-11-18 18:40:53.561862] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:55.423 [2024-11-18 18:40:53.561873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:55.423 [2024-11-18 18:40:53.561922] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:55.423 [2024-11-18 18:40:53.562122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.423 [2024-11-18 18:40:53.562162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:55.423 [2024-11-18 18:40:53.562189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:55.423 [2024-11-18 18:40:53.562226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:55.423 [2024-11-18 18:40:53.562261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:55.423 [2024-11-18 18:40:53.562285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:55.423 [2024-11-18 18:40:53.562307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:55.423 [2024-11-18 18:40:53.562327] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:55.423 [2024-11-18 18:40:53.562345] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:33:55.423 [2024-11-18 18:40:53.562359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:55.423 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.423 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:55.423 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:55.423 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:55.423 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:55.423 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:55.423 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:55.423 [2024-11-18 18:40:53.571966] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:55.423 [2024-11-18 18:40:53.572007] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:55.423 [2024-11-18 18:40:53.572031] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:55.423 [2024-11-18 18:40:53.572046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:55.423 [2024-11-18 18:40:53.572087] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:55.423 [2024-11-18 18:40:53.572254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.423 [2024-11-18 18:40:53.572296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:55.423 [2024-11-18 18:40:53.572323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:55.423 [2024-11-18 18:40:53.572361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:55.423 [2024-11-18 18:40:53.572396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:55.423 [2024-11-18 18:40:53.572419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:55.423 [2024-11-18 18:40:53.572441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:55.423 [2024-11-18 18:40:53.572462] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:55.423 [2024-11-18 18:40:53.572479] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:55.423 [2024-11-18 18:40:53.572493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
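The `is_notification_count_eq` checks threaded through this run (`host/discovery.sh` @74/@75: first `notification_count=1`, `notify_id=2`, later `notification_count=0`) reduce to counting the JSON array returned by `notify_get_notifications` with `jq '. | length'`. A sketch of that counting logic with a stub standing in for the live `rpc_cmd -s /tmp/host.sock` call — the stub payload is invented for illustration, and `jq` must be installed:

```shell
#!/usr/bin/env bash
# Stub for: rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id"
# The payload shape is hypothetical; the live RPC returns a JSON array of
# notification events newer than id $notify_id.
rpc_stub() {
    echo '[{"type":"bdev_register","ctx":"nvme0n2"}]'
}

# Mirrors @74/@75 as seen in the xtrace: count the pending events,
# then advance notify_id past them so the next query starts fresh.
get_notification_count() {
    notification_count=$(rpc_stub | jq '. | length')
    notify_id=$((notify_id + notification_count))
}

notify_id=1
get_notification_count
echo "count=$notification_count notify_id=$notify_id"
```

With this one-event stub the state matches the log: one new notification is counted and `notify_id` moves from 1 to 2, which is why the subsequent `-i 2` query above returns an empty array and `notification_count=0`.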
00:33:55.423 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:55.423 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:55.423 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:55.423 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.423 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:55.423 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.423 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:55.423 [2024-11-18 18:40:53.582133] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:55.423 [2024-11-18 18:40:53.582175] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:55.423 [2024-11-18 18:40:53.582194] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:55.423 [2024-11-18 18:40:53.582208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:55.423 [2024-11-18 18:40:53.582250] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:55.423 [2024-11-18 18:40:53.582473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.423 [2024-11-18 18:40:53.582525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:55.423 [2024-11-18 18:40:53.582550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:55.423 [2024-11-18 18:40:53.582583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:55.423 [2024-11-18 18:40:53.582656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:55.423 [2024-11-18 18:40:53.582683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:55.423 [2024-11-18 18:40:53.582705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:55.423 [2024-11-18 18:40:53.582723] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:55.423 [2024-11-18 18:40:53.582738] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:55.423 [2024-11-18 18:40:53.582750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:55.423 [2024-11-18 18:40:53.592292] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:55.423 [2024-11-18 18:40:53.592331] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:55.423 [2024-11-18 18:40:53.592349] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:55.423 [2024-11-18 18:40:53.592363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:55.423 [2024-11-18 18:40:53.592404] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:55.423 [2024-11-18 18:40:53.592577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.423 [2024-11-18 18:40:53.592634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:55.423 [2024-11-18 18:40:53.592692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:55.423 [2024-11-18 18:40:53.592727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:55.423 [2024-11-18 18:40:53.592778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:55.423 [2024-11-18 18:40:53.592805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:55.423 [2024-11-18 18:40:53.592825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:55.423 [2024-11-18 18:40:53.592844] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:55.423 [2024-11-18 18:40:53.592859] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:33:55.423 [2024-11-18 18:40:53.592871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:55.423 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.423 [2024-11-18 18:40:53.602464] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:55.423 [2024-11-18 18:40:53.602502] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:55.423 [2024-11-18 18:40:53.602521] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:55.423 [2024-11-18 18:40:53.602535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:55.423 [2024-11-18 18:40:53.602587] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:55.423 [2024-11-18 18:40:53.602829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.423 [2024-11-18 18:40:53.602867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:55.423 [2024-11-18 18:40:53.602915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:55.423 [2024-11-18 18:40:53.602954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:55.423 [2024-11-18 18:40:53.603006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:55.423 [2024-11-18 18:40:53.603035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:55.423 [2024-11-18 18:40:53.603057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:55.423 [2024-11-18 18:40:53.603077] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:55.423 [2024-11-18 18:40:53.603094] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:55.423 [2024-11-18 18:40:53.603108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
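`get_subsystem_paths` (`host/discovery.sh` @63) extracts the listener ports from `bdev_nvme_get_controllers -n nvme0` via `jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs`, which is why the comparison in this log walks from `4420` to `4420 4421` and finally `4421` as paths are added and removed. The same pipeline run against a canned controller document — the JSON here is a hand-written stand-in shaped like the fields the jq filter touches, and `jq` must be installed:

```shell
#!/usr/bin/env bash
# Stand-in for: rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1"
# Illustrative payload only, modeling a controller with two TCP paths.
ctrlrs_stub() {
    echo '[{"name":"nvme0","ctrlrs":[{"trid":{"trsvcid":"4421"}},{"trid":{"trsvcid":"4420"}}]}]'
}

# Mirrors @63: one trsvcid per attached path, numerically sorted,
# joined into a single space-separated string by xargs.
get_subsystem_paths() {
    ctrlrs_stub "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

get_subsystem_paths nvme0
```

The `sort -n | xargs` normalization is what makes the string comparison against `"$NVMF_PORT $NVMF_SECOND_PORT"` order-independent: RPC enumeration order may vary, but the compared value is always the sorted port list on one line.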
00:33:55.423 [2024-11-18 18:40:53.610042] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:55.423 [2024-11-18 18:40:53.610092] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:55.423 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:55.423 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.424 18:40:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.424 
18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.424 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # jq '. | length' 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.682 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.615 [2024-11-18 18:40:54.846425] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:56.615 [2024-11-18 18:40:54.846468] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:56.615 [2024-11-18 18:40:54.846522] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:56.615 [2024-11-18 18:40:54.934843] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:56.873 [2024-11-18 18:40:54.998807] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] 
ctrlr was created to 10.0.0.2:4421 00:33:56.873 [2024-11-18 18:40:55.000257] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x6150001f3e00:1 started. 00:33:56.873 [2024-11-18 18:40:55.003114] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:56.873 [2024-11-18 18:40:55.003182] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:56.873 [2024-11-18 18:40:55.005955] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x6150001f3e00 was disconnected and freed. delete nvme_qpair. 
00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.873 request: 00:33:56.873 { 00:33:56.873 "name": "nvme", 00:33:56.873 "trtype": "tcp", 00:33:56.873 "traddr": "10.0.0.2", 00:33:56.873 "adrfam": "ipv4", 00:33:56.873 "trsvcid": "8009", 00:33:56.873 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:56.873 "wait_for_attach": true, 00:33:56.873 "method": "bdev_nvme_start_discovery", 00:33:56.873 "req_id": 1 00:33:56.873 } 00:33:56.873 Got JSON-RPC error response 00:33:56.873 response: 00:33:56.873 { 00:33:56.873 "code": -17, 00:33:56.873 "message": "File exists" 00:33:56.873 } 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@652 -- # local es=0 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.873 request: 00:33:56.873 { 00:33:56.873 "name": "nvme_second", 00:33:56.873 "trtype": "tcp", 00:33:56.873 "traddr": "10.0.0.2", 00:33:56.873 "adrfam": "ipv4", 00:33:56.873 "trsvcid": "8009", 00:33:56.873 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:56.873 "wait_for_attach": true, 00:33:56.873 "method": "bdev_nvme_start_discovery", 00:33:56.873 "req_id": 1 00:33:56.873 } 00:33:56.873 Got JSON-RPC error response 00:33:56.873 response: 00:33:56.873 { 00:33:56.873 "code": -17, 00:33:56.873 "message": "File exists" 00:33:56.873 } 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:56.873 
18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:56.873 
18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:56.873 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:56.874 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:56.874 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.874 18:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.248 [2024-11-18 18:40:56.198926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:58.248 [2024-11-18 18:40:56.198986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4080 with addr=10.0.0.2, port=8010 00:33:58.248 [2024-11-18 18:40:56.199059] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:58.248 [2024-11-18 18:40:56.199085] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:58.248 [2024-11-18 18:40:56.199106] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:59.182 [2024-11-18 18:40:57.201475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.182 [2024-11-18 18:40:57.201563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=8010 00:33:59.182 [2024-11-18 18:40:57.201685] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:59.182 [2024-11-18 18:40:57.201715] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:59.182 [2024-11-18 18:40:57.201736] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:00.116 [2024-11-18 18:40:58.203463] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:00.116 request: 00:34:00.116 { 00:34:00.116 "name": "nvme_second", 00:34:00.116 "trtype": "tcp", 00:34:00.116 "traddr": "10.0.0.2", 00:34:00.116 "adrfam": "ipv4", 00:34:00.116 "trsvcid": "8010", 00:34:00.116 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:00.116 "wait_for_attach": false, 00:34:00.116 "attach_timeout_ms": 3000, 00:34:00.116 "method": "bdev_nvme_start_discovery", 00:34:00.116 "req_id": 1 00:34:00.116 } 00:34:00.116 Got JSON-RPC error response 00:34:00.116 response: 00:34:00.116 { 00:34:00.116 "code": -110, 00:34:00.116 "message": "Connection timed out" 00:34:00.116 } 00:34:00.116 18:40:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3099312 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:00.116 18:40:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:00.116 rmmod nvme_tcp 00:34:00.116 rmmod nvme_fabrics 00:34:00.116 rmmod nvme_keyring 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3099161 ']' 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3099161 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3099161 ']' 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3099161 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3099161 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:00.116 18:40:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3099161' 00:34:00.116 killing process with pid 3099161 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3099161 00:34:00.116 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3099161 00:34:01.491 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:01.491 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:01.491 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:01.491 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:34:01.491 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:34:01.491 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:01.491 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:34:01.491 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:01.491 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:01.491 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:01.491 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:01.491 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:03.397 00:34:03.397 real 0m15.520s 00:34:03.397 user 0m22.929s 00:34:03.397 sys 0m3.059s 00:34:03.397 18:41:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.397 ************************************ 00:34:03.397 END TEST nvmf_host_discovery 00:34:03.397 ************************************ 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.397 ************************************ 00:34:03.397 START TEST nvmf_host_multipath_status 00:34:03.397 ************************************ 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:03.397 * Looking for test storage... 
00:34:03.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:03.397 18:41:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:03.397 18:41:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:03.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.397 --rc genhtml_branch_coverage=1 00:34:03.397 --rc genhtml_function_coverage=1 00:34:03.397 --rc genhtml_legend=1 00:34:03.397 --rc geninfo_all_blocks=1 00:34:03.397 --rc geninfo_unexecuted_blocks=1 00:34:03.397 00:34:03.397 ' 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:03.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.397 --rc genhtml_branch_coverage=1 00:34:03.397 --rc genhtml_function_coverage=1 00:34:03.397 --rc genhtml_legend=1 00:34:03.397 --rc geninfo_all_blocks=1 00:34:03.397 --rc geninfo_unexecuted_blocks=1 00:34:03.397 00:34:03.397 ' 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:03.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.397 --rc genhtml_branch_coverage=1 00:34:03.397 --rc genhtml_function_coverage=1 00:34:03.397 --rc genhtml_legend=1 00:34:03.397 --rc geninfo_all_blocks=1 00:34:03.397 --rc geninfo_unexecuted_blocks=1 00:34:03.397 00:34:03.397 ' 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:03.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.397 --rc genhtml_branch_coverage=1 00:34:03.397 --rc genhtml_function_coverage=1 00:34:03.397 --rc genhtml_legend=1 00:34:03.397 --rc geninfo_all_blocks=1 00:34:03.397 --rc geninfo_unexecuted_blocks=1 00:34:03.397 00:34:03.397 ' 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:03.397 
18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:03.397 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:03.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
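The build_nvmf_app_args trace above surfaces a real shell error: `'[' '' -eq 1 ']'` fails with "integer expression expected" because the tested variable is empty rather than numeric. A hedged sketch of the defensive pattern (the flag name below is illustrative, not necessarily the variable common.sh line 33 actually tests):

```shell
# Guarding an integer test against an unset/empty flag: ${VAR:-0}
# substitutes 0 for an empty or unset value, so [ ... -eq 1 ] always
# sees an integer. The flag name is illustrative only.
SPDK_TEST_FLAG=""                       # empty in this run, as in the log

nic_setup_needed() {
    [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]    # never "integer expression expected"
}

if nic_setup_needed; then
    echo "configure NICs"
else
    echo "skip NIC setup"
fi
```

The error in the log is harmless here because the test simply evaluates false and the script continues, but the `${VAR:-0}` form would keep the noise out of the console output.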
00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:03.398 18:41:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:34:03.398 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:05.306 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:05.306 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:05.306 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:05.606 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.606 18:41:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:05.606 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:05.606 18:41:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:05.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:05.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:34:05.606 00:34:05.606 --- 10.0.0.2 ping statistics --- 00:34:05.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.606 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:05.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:05.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:34:05.606 00:34:05.606 --- 10.0.0.1 ping statistics --- 00:34:05.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.606 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:05.606 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:05.607 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:05.607 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:05.607 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:05.607 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:05.607 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:05.607 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3102484 00:34:05.607 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
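The nvmf_tcp_init sequence above builds a two-namespace loopback out of one physical NIC: port cvl_0_0 moves into namespace cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens TCP port 4420, and the two pings verify reachability in both directions. Collected from the logged commands (requires root and the same NIC devices; an outline of what the log executed, not a portable script):

```shell
# Two-namespace NVMe/TCP loopback, as executed in the trace above.
ip netns add cvl_0_0_ns_spdk                      # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP (root ns)
ip netns exec cvl_0_0_ns_spdk \
    ip addr add 10.0.0.2/24 dev cvl_0_0           # target IP (inside ns)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
```

Because the target lives in its own namespace, every later target-side command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk`, including the `nvmf_tgt` launch itself.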
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:05.607 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3102484 00:34:05.607 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3102484 ']' 00:34:05.607 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:05.607 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:05.607 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:05.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:05.607 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:05.607 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:05.607 [2024-11-18 18:41:03.872120] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:34:05.607 [2024-11-18 18:41:03.872263] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:05.890 [2024-11-18 18:41:04.032017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:05.890 [2024-11-18 18:41:04.168635] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:05.890 [2024-11-18 18:41:04.168731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:05.890 [2024-11-18 18:41:04.168758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:05.890 [2024-11-18 18:41:04.168783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:05.890 [2024-11-18 18:41:04.168804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:05.890 [2024-11-18 18:41:04.171368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:05.890 [2024-11-18 18:41:04.171373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:06.826 18:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:06.826 18:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:06.826 18:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:06.826 18:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:06.826 18:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:06.826 18:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:06.826 18:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3102484 00:34:06.826 18:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:06.826 [2024-11-18 18:41:05.122361] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:06.827 18:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:34:07.393 Malloc0 00:34:07.393 18:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:07.651 18:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:07.909 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:08.167 [2024-11-18 18:41:06.312218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:08.167 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:08.426 [2024-11-18 18:41:06.584936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:08.426 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3102900 00:34:08.426 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:08.426 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:08.426 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3102900 /var/tmp/bdevperf.sock 00:34:08.426 18:41:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3102900 ']' 00:34:08.426 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:08.426 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:08.426 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:08.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:08.426 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:08.426 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:09.360 18:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:09.360 18:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:09.360 18:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:09.618 18:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:10.184 Nvme0n1 00:34:10.184 18:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
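`waitforlisten` in the trace above blocks until bdevperf is up and serving /var/tmp/bdevperf.sock, with `max_retries=100`. A minimal sketch of that poll loop (a reconstruction of the pattern; the real autotest_common.sh helper also tracks the pid and RPC readiness, which this omits):

```shell
# Poll until a UNIX-domain socket appears, up to a retry budget. This is
# a reconstruction of the waitforlisten pattern seen in the trace, not
# the actual autotest_common.sh implementation.
waitforsock() {
    local sock=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        [ -S "$sock" ] && return 0    # socket exists: process is listening
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

In this run, `waitforsock /var/tmp/bdevperf.sock 100` would gate the `rpc.py -s /var/tmp/bdevperf.sock` calls that follow, the same role `waitforlisten 3102900 /var/tmp/bdevperf.sock` plays in the log.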
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:10.441 Nvme0n1 00:34:10.699 18:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:10.699 18:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:12.599 18:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:12.599 18:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:12.858 18:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:13.116 18:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:14.051 18:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:14.051 18:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:14.051 18:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.051 18:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:14.309 18:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:14.309 18:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:14.309 18:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.309 18:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:14.567 18:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:14.567 18:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:14.568 18:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.568 18:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:15.134 18:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:15.134 18:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:15.134 18:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:15.134 18:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:15.134 18:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:15.134 18:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:15.134 18:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:15.134 18:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:15.391 18:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:15.391 18:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:15.391 18:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:15.391 18:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:15.649 18:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:15.649 18:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:15.649 18:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:16.215 18:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:16.215 18:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:17.588 18:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:17.588 18:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:17.588 18:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.588 18:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:17.588 18:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:17.588 18:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:17.588 18:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.588 18:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:17.847 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.847 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:17.847 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.847 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:18.105 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.105 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:18.105 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.105 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:18.363 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.363 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:18.363 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.363 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:18.621 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.621 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:18.621 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.621 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:18.879 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.879 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:18.879 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:19.445 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:19.445 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:20.819 18:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:20.819 18:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:20.819 18:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.819 18:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:20.819 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:20.819 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:20.819 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.819 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:21.076 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:21.076 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:21.076 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.076 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:21.333 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:21.333 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:21.333 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.333 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:21.591 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:21.591 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:21.591 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.591 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:21.849 18:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:21.849 18:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:21.849 18:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.849 18:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:22.416 18:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:22.416 18:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:22.416 18:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:22.416 18:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:22.981 18:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:23.916 18:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:23.916 18:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:23.916 18:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.916 18:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:24.175 18:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.175 18:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:24.175 18:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.175 18:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:24.433 18:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:24.433 18:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:24.433 18:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.433 18:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:24.691 18:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.691 18:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:24.691 18:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.691 18:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:24.949 18:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.949 18:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:24.949 18:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.949 18:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:25.207 18:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:25.207 18:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:25.207 18:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.207 18:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:25.466 18:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:25.466 18:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:25.466 18:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:25.724 18:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:25.982 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:26.915 18:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:26.915 18:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:26.915 18:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.915 18:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:27.173 18:41:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:27.173 18:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:27.173 18:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.173 18:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:27.431 18:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:27.431 18:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:27.431 18:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.431 18:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:27.996 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.996 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:27.996 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.996 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:27.996 
18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.996 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:27.996 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.996 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:28.254 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:28.254 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:28.254 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.254 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:28.820 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:28.820 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:28.820 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:28.820 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:29.078 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:30.451 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:30.451 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:30.451 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.451 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:30.451 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:30.451 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:30.451 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.451 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:30.709 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.709 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:30.709 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.709 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:30.967 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.967 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:30.967 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.967 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:31.226 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.226 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:31.226 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.226 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:31.484 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:31.484 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:31.484 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.484 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:31.743 18:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.743 18:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:32.001 18:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:32.001 18:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:32.568 18:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:32.568 18:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:33.944 18:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:33.944 18:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:33.944 18:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:34:33.944 18:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:33.944 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.944 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:33.944 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.944 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:34.202 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.202 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:34.202 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.202 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:34.460 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.460 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:34.460 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:34:34.460 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:34.718 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.718 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:34.718 18:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.718 18:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:34.975 18:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.975 18:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:34.975 18:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.975 18:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:35.233 18:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.233 18:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:35.233 18:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:35.490 18:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:36.056 18:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:37.078 18:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:37.078 18:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:37.078 18:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.078 18:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:37.078 18:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:37.078 18:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:37.078 18:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.078 18:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:37.336 18:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.336 18:41:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:37.594 18:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.594 18:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:37.852 18:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.852 18:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:37.852 18:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.852 18:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:38.110 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.110 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:38.110 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.110 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:38.368 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.368 
18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:38.368 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.368 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:38.626 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.626 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:38.626 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:38.885 18:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:39.143 18:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:34:40.075 18:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:40.075 18:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:40.075 18:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.075 18:41:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:40.333 18:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.333 18:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:40.333 18:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.333 18:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:40.591 18:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.591 18:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:40.591 18:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.591 18:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:40.850 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.850 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:40.850 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.850 18:41:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:41.117 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.117 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:41.117 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.117 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:41.378 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.378 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:41.378 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.379 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:41.636 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.636 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:41.636 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:42.202 18:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:42.460 18:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:43.393 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:43.393 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:43.393 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.393 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:43.651 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.651 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:43.651 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.651 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:43.910 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:43.910 18:41:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:43.910 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.910 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:44.168 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.168 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:44.168 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.168 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:44.426 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.426 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:44.426 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.426 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:44.684 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.684 
18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:44.684 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.684 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:44.942 18:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:44.942 18:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3102900 00:34:44.942 18:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3102900 ']' 00:34:44.942 18:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3102900 00:34:44.942 18:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:44.942 18:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:44.942 18:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3102900 00:34:44.942 18:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:34:44.942 18:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:34:44.942 18:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3102900' 00:34:44.942 killing process with pid 3102900 00:34:44.942 18:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3102900 00:34:44.942 
18:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3102900 00:34:45.200 { 00:34:45.200 "results": [ 00:34:45.200 { 00:34:45.200 "job": "Nvme0n1", 00:34:45.200 "core_mask": "0x4", 00:34:45.200 "workload": "verify", 00:34:45.200 "status": "terminated", 00:34:45.200 "verify_range": { 00:34:45.200 "start": 0, 00:34:45.200 "length": 16384 00:34:45.200 }, 00:34:45.200 "queue_depth": 128, 00:34:45.200 "io_size": 4096, 00:34:45.200 "runtime": 34.295951, 00:34:45.200 "iops": 5910.435316402219, 00:34:45.200 "mibps": 23.087637954696167, 00:34:45.200 "io_failed": 0, 00:34:45.200 "io_timeout": 0, 00:34:45.200 "avg_latency_us": 21618.82703852799, 00:34:45.200 "min_latency_us": 292.78814814814814, 00:34:45.200 "max_latency_us": 4026531.84 00:34:45.200 } 00:34:45.200 ], 00:34:45.200 "core_count": 1 00:34:45.200 } 00:34:45.770 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3102900 00:34:45.770 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:45.770 [2024-11-18 18:41:06.684745] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:34:45.770 [2024-11-18 18:41:06.684892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3102900 ] 00:34:45.770 [2024-11-18 18:41:06.820502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.770 [2024-11-18 18:41:06.947396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:45.770 Running I/O for 90 seconds... 
00:34:45.770 6229.00 IOPS, 24.33 MiB/s [2024-11-18T17:41:44.107Z] 6212.50 IOPS, 24.27 MiB/s [2024-11-18T17:41:44.107Z] 6230.33 IOPS, 24.34 MiB/s [2024-11-18T17:41:44.107Z] 6186.75 IOPS, 24.17 MiB/s [2024-11-18T17:41:44.107Z] 6183.40 IOPS, 24.15 MiB/s [2024-11-18T17:41:44.107Z] 6206.00 IOPS, 24.24 MiB/s [2024-11-18T17:41:44.107Z] 6203.57 IOPS, 24.23 MiB/s [2024-11-18T17:41:44.107Z] 6201.00 IOPS, 24.22 MiB/s [2024-11-18T17:41:44.107Z] 6216.78 IOPS, 24.28 MiB/s [2024-11-18T17:41:44.107Z] 6227.10 IOPS, 24.32 MiB/s [2024-11-18T17:41:44.107Z] 6223.55 IOPS, 24.31 MiB/s [2024-11-18T17:41:44.107Z] 6204.08 IOPS, 24.23 MiB/s [2024-11-18T17:41:44.107Z] 6200.54 IOPS, 24.22 MiB/s [2024-11-18T17:41:44.107Z] 6204.64 IOPS, 24.24 MiB/s [2024-11-18T17:41:44.107Z] [2024-11-18 18:41:23.941758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.770 [2024-11-18 18:41:23.941856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:45.770 [2024-11-18 18:41:23.941994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.770 [2024-11-18 18:41:23.942041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:45.770 [2024-11-18 18:41:23.942084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.770 [2024-11-18 18:41:23.942110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:45.770 [2024-11-18 18:41:23.942148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:45.770 [2024-11-18 18:41:23.942175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:45.770 [2024-11-18 18:41:23.942212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.770 [2024-11-18 18:41:23.942238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:45.770 [2024-11-18 18:41:23.942274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.770 [2024-11-18 18:41:23.942299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:45.770 [2024-11-18 18:41:23.942337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.770 [2024-11-18 18:41:23.942363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:45.770 [2024-11-18 18:41:23.942399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.770 [2024-11-18 18:41:23.942440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:45.770 [2024-11-18 18:41:23.942477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.770 [2024-11-18 18:41:23.942501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
00:34:45.770 [2024-11-18 18:41:23.942538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.770 [2024-11-18 18:41:23.942575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:45.770 [2024-11-18 18:41:23.942644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.770 [2024-11-18 18:41:23.942671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:45.770 [2024-11-18 18:41:23.942709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.770 [2024-11-18 18:41:23.942735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:45.770 [2024-11-18 18:41:23.942772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.770 [2024-11-18 18:41:23.942798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:45.770 [2024-11-18 18:41:23.942835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.770 [2024-11-18 18:41:23.942861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:45.771 [2024-11-18 18:41:23.942898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.771 
[2024-11-18 18:41:23.942939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:45.771 [2024-11-18 18:41:23.942976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.771 [2024-11-18 18:41:23.943017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:45.771 [2024-11-18 18:41:23.943055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.771 [2024-11-18 18:41:23.943080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:45.771 [2024-11-18 18:41:23.943116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.771 [2024-11-18 18:41:23.943142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:45.771 [2024-11-18 18:41:23.943179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.771 [2024-11-18 18:41:23.943204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:45.771 [2024-11-18 18:41:23.943242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.771 [2024-11-18 18:41:23.943267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:45.771 [2024-11-18 
18:41:23.943304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.771 [2024-11-18 18:41:23.943344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:45.771 [2024-11-18 18:41:23.943381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.771 [2024-11-18 18:41:23.943412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:45.771 [2024-11-18 18:41:23.943449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.771 [2024-11-18 18:41:23.943474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:45.771 [2024-11-18 18:41:23.943510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.771 [2024-11-18 18:41:23.943534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:45.771 [2024-11-18 18:41:23.943570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.771 [2024-11-18 18:41:23.943618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:45.771 [2024-11-18 18:41:23.943658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.771 [2024-11-18 18:41:23.943684] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:34:45.771 [2024-11-18 18:41:23.943721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:45.771 [2024-11-18 18:41:23.943746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
[~100 similar command/completion pairs elided: WRITE (and occasional READ) commands on sqid:1, lba 88880-89880, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-11-18 18:41:23.943-.952]
00:34:45.773 [2024-11-18 18:41:23.952817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:45.773 [2024-11-18 18:41:23.952856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:34:45.773 6201.93 IOPS, 24.23 MiB/s [2024-11-18T17:41:44.110Z]
5814.31 IOPS, 22.71 MiB/s [2024-11-18T17:41:44.110Z]
5472.29 IOPS, 21.38 MiB/s [2024-11-18T17:41:44.110Z]
5168.28 IOPS, 20.19 MiB/s [2024-11-18T17:41:44.110Z]
4896.32 IOPS, 19.13 MiB/s [2024-11-18T17:41:44.110Z]
4956.15 IOPS, 19.36 MiB/s [2024-11-18T17:41:44.110Z]
5014.71 IOPS, 19.59 MiB/s [2024-11-18T17:41:44.110Z]
5093.95 IOPS, 19.90 MiB/s [2024-11-18T17:41:44.110Z]
5245.35 IOPS, 20.49 MiB/s [2024-11-18T17:41:44.110Z]
5394.25 IOPS, 21.07 MiB/s [2024-11-18T17:41:44.111Z]
5516.88 IOPS, 21.55 MiB/s [2024-11-18T17:41:44.111Z]
5544.65 IOPS, 21.66 MiB/s [2024-11-18T17:41:44.111Z]
5568.07 IOPS, 21.75 MiB/s [2024-11-18T17:41:44.111Z]
5589.50 IOPS, 21.83 MiB/s [2024-11-18T17:41:44.111Z]
5666.34 IOPS, 22.13 MiB/s [2024-11-18T17:41:44.111Z]
5763.17 IOPS, 22.51 MiB/s [2024-11-18T17:41:44.111Z]
5855.00 IOPS, 22.87 MiB/s [2024-11-18T17:41:44.111Z]
[2024-11-18 18:41:40.542594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:45.774 [2024-11-18 18:41:40.542693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
[~9 similar WRITE command/completion pairs elided: sqid:1, lba 46848-46976, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-11-18 18:41:40.542-.543]
00:34:45.774 [2024-11-18 18:41:40.543788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:45.774 [2024-11-18 18:41:40.543813] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.543851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.774 [2024-11-18 18:41:40.543885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.543923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.774 [2024-11-18 18:41:40.543948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.543985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.774 [2024-11-18 18:41:40.544011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.544049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.774 [2024-11-18 18:41:40.544096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.544136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:46120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.774 [2024-11-18 18:41:40.544161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.544197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.774 [2024-11-18 18:41:40.544223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.544259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.774 [2024-11-18 18:41:40.544284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.544321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.774 [2024-11-18 18:41:40.544345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.544382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.774 [2024-11-18 18:41:40.544408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.544460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.774 [2024-11-18 18:41:40.544486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.544524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.774 [2024-11-18 18:41:40.544550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.544589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.774 [2024-11-18 18:41:40.544623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.544664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.774 [2024-11-18 18:41:40.544690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.544728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.774 [2024-11-18 18:41:40.544754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.544793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.774 [2024-11-18 18:41:40.544819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.544858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.774 [2024-11-18 18:41:40.544889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.544944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.774 [2024-11-18 18:41:40.544969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.545006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.774 [2024-11-18 18:41:40.545030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.545066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.774 [2024-11-18 18:41:40.545091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.545129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.774 [2024-11-18 18:41:40.545170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.545209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.774 [2024-11-18 18:41:40.545239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.545278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.774 [2024-11-18 18:41:40.545303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:45.774 [2024-11-18 18:41:40.545342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.775 [2024-11-18 18:41:40.545368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.545405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:46464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.775 [2024-11-18 18:41:40.545435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.545474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.775 [2024-11-18 18:41:40.545500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.545538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:46528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.775 [2024-11-18 18:41:40.545564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.545624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:46560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.775 [2024-11-18 18:41:40.545666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.545708] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.775 [2024-11-18 18:41:40.545743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.545785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.775 [2024-11-18 18:41:40.545812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.545851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.775 [2024-11-18 18:41:40.545877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.545914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.775 [2024-11-18 18:41:40.545940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.545993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.775 [2024-11-18 18:41:40.546018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.546055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.775 [2024-11-18 18:41:40.546081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.546118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.775 [2024-11-18 18:41:40.546143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.546180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.775 [2024-11-18 18:41:40.546204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.546241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:46832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.775 [2024-11-18 18:41:40.546265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.546323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.775 [2024-11-18 18:41:40.546365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.546405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.775 [2024-11-18 18:41:40.546431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.546469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.775 [2024-11-18 18:41:40.546494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.546533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.775 [2024-11-18 18:41:40.546562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.546605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.775 [2024-11-18 18:41:40.546642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.546681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.775 [2024-11-18 18:41:40.546706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.546744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.775 [2024-11-18 18:41:40.546769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.546807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:46728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.775 [2024-11-18 18:41:40.546834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.546871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.775 [2024-11-18 18:41:40.546912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.546950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.775 [2024-11-18 18:41:40.546977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.547032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.775 [2024-11-18 18:41:40.547057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.548259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.775 [2024-11-18 18:41:40.548308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.548354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.775 [2024-11-18 18:41:40.548380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.548418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.775 [2024-11-18 18:41:40.548444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.548482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.775 [2024-11-18 18:41:40.548523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.548560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.775 [2024-11-18 18:41:40.548584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.548651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:45.775 [2024-11-18 18:41:40.548679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.548716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.775 [2024-11-18 18:41:40.548741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:45.775 [2024-11-18 18:41:40.548778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.776 [2024-11-18 18:41:40.548804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:45.776 [2024-11-18 18:41:40.548841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.776 [2024-11-18 18:41:40.548867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:45.776 [2024-11-18 18:41:40.548904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:45.776 [2024-11-18 18:41:40.548945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:45.776 5900.38 IOPS, 23.05 MiB/s [2024-11-18T17:41:44.113Z] 5905.18 IOPS, 23.07 MiB/s [2024-11-18T17:41:44.113Z] 5912.29 IOPS, 23.09 MiB/s [2024-11-18T17:41:44.113Z] Received shutdown signal, test time was about 34.296860 seconds 00:34:45.776 00:34:45.776 Latency(us) 00:34:45.776 [2024-11-18T17:41:44.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:45.776 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:45.776 Verification LBA range: start 0x0 length 0x4000 00:34:45.776 Nvme0n1 : 34.30 5910.44 23.09 0.00 0.00 21618.83 292.79 4026531.84 00:34:45.776 [2024-11-18T17:41:44.113Z] =================================================================================================================== 00:34:45.776 [2024-11-18T17:41:44.113Z] Total : 5910.44 23.09 0.00 0.00 21618.83 292.79 4026531.84 00:34:45.776 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:46.034 18:41:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:34:46.034 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:46.034 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:34:46.034 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:46.034 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:34:46.034 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:46.034 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:34:46.034 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:46.034 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:46.034 rmmod nvme_tcp 00:34:46.034 rmmod nvme_fabrics 00:34:46.293 rmmod nvme_keyring 00:34:46.293 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:46.293 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:34:46.293 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:34:46.293 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3102484 ']' 00:34:46.293 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3102484 00:34:46.293 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3102484 ']' 00:34:46.293 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3102484 00:34:46.293 18:41:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:46.293 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:46.293 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3102484 00:34:46.293 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:46.293 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:46.293 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3102484' 00:34:46.293 killing process with pid 3102484 00:34:46.293 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3102484 00:34:46.293 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3102484 00:34:47.667 18:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:47.667 18:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:47.667 18:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:47.667 18:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:34:47.667 18:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:34:47.667 18:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:47.667 18:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:34:47.667 18:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:47.667 18:41:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:47.667 18:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.667 18:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:47.667 18:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.567 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:49.567 00:34:49.567 real 0m46.233s 00:34:49.567 user 2m19.458s 00:34:49.567 sys 0m10.475s 00:34:49.567 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:49.567 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:49.567 ************************************ 00:34:49.567 END TEST nvmf_host_multipath_status 00:34:49.567 ************************************ 00:34:49.567 18:41:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:49.567 18:41:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:49.567 18:41:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:49.567 18:41:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.567 ************************************ 00:34:49.567 START TEST nvmf_discovery_remove_ifc 00:34:49.567 ************************************ 00:34:49.567 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:49.567 * Looking for test storage... 
00:34:49.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:49.567 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:49.567 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:34:49.567 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:34:49.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.826 --rc genhtml_branch_coverage=1 00:34:49.826 --rc genhtml_function_coverage=1 00:34:49.826 --rc genhtml_legend=1 00:34:49.826 --rc geninfo_all_blocks=1 00:34:49.826 --rc geninfo_unexecuted_blocks=1 00:34:49.826 00:34:49.826 ' 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:49.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.826 --rc genhtml_branch_coverage=1 00:34:49.826 --rc genhtml_function_coverage=1 00:34:49.826 --rc genhtml_legend=1 00:34:49.826 --rc geninfo_all_blocks=1 00:34:49.826 --rc geninfo_unexecuted_blocks=1 00:34:49.826 00:34:49.826 ' 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:49.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.826 --rc genhtml_branch_coverage=1 00:34:49.826 --rc genhtml_function_coverage=1 00:34:49.826 --rc genhtml_legend=1 00:34:49.826 --rc geninfo_all_blocks=1 00:34:49.826 --rc geninfo_unexecuted_blocks=1 00:34:49.826 00:34:49.826 ' 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:49.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.826 --rc genhtml_branch_coverage=1 00:34:49.826 --rc genhtml_function_coverage=1 00:34:49.826 --rc genhtml_legend=1 00:34:49.826 --rc geninfo_all_blocks=1 00:34:49.826 --rc geninfo_unexecuted_blocks=1 00:34:49.826 00:34:49.826 ' 00:34:49.826 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.827 18:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:49.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:49.827 
18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:34:49.827 18:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:34:51.727 18:41:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:51.727 18:41:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:51.727 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:51.728 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:51.728 18:41:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:51.728 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:51.728 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:51.728 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:51.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:51.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:34:51.728 00:34:51.728 --- 10.0.0.2 ping statistics --- 00:34:51.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:51.728 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:51.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:51.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:34:51.728 00:34:51.728 --- 10.0.0.1 ping statistics --- 00:34:51.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:51.728 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3109492 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3109492 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3109492 ']' 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:51.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:51.728 18:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:51.986 [2024-11-18 18:41:50.088180] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:34:51.986 [2024-11-18 18:41:50.088328] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:51.987 [2024-11-18 18:41:50.235243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:52.245 [2024-11-18 18:41:50.360593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:52.245 [2024-11-18 18:41:50.360710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:52.245 [2024-11-18 18:41:50.360733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:52.245 [2024-11-18 18:41:50.360755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:52.245 [2024-11-18 18:41:50.360772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:52.245 [2024-11-18 18:41:50.362255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:52.811 18:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:52.811 18:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:52.811 18:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:52.811 18:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:52.811 18:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:52.811 18:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:52.811 18:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:52.811 18:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.811 18:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:52.811 [2024-11-18 18:41:51.130748] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:52.811 [2024-11-18 18:41:51.139047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:53.068 null0 00:34:53.068 [2024-11-18 18:41:51.171161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:34:53.068 18:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.068 18:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3109647 00:34:53.068 18:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:53.068 18:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3109647 /tmp/host.sock 00:34:53.068 18:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3109647 ']' 00:34:53.068 18:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:53.068 18:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:53.068 18:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:53.069 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:53.069 18:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:53.069 18:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:53.069 [2024-11-18 18:41:51.281471] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:34:53.069 [2024-11-18 18:41:51.281635] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3109647 ] 00:34:53.326 [2024-11-18 18:41:51.425229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.326 [2024-11-18 18:41:51.561879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:54.259 18:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:54.259 18:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:54.259 18:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:54.259 18:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:54.259 18:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.259 18:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:54.259 18:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.259 18:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:54.259 18:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.259 18:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:54.517 18:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.517 18:41:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:54.517 18:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.517 18:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:55.450 [2024-11-18 18:41:53.653057] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:55.450 [2024-11-18 18:41:53.653115] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:55.450 [2024-11-18 18:41:53.653167] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:55.450 [2024-11-18 18:41:53.779617] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:55.708 [2024-11-18 18:41:53.840667] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:55.708 [2024-11-18 18:41:53.842387] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2c80:1 started. 
00:34:55.708 [2024-11-18 18:41:53.844807] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:55.708 [2024-11-18 18:41:53.844904] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:55.708 [2024-11-18 18:41:53.845006] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:55.708 [2024-11-18 18:41:53.845050] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:55.708 [2024-11-18 18:41:53.845104] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:55.708 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.708 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:55.708 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:55.708 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:55.708 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:55.708 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.708 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:55.708 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:55.708 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:55.708 [2024-11-18 18:41:53.851659] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2c80 was disconnected and freed. delete nvme_qpair. 
00:34:55.709 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.709 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:55.709 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:55.709 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:55.709 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:55.709 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:55.709 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:55.709 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:55.709 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.709 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:55.709 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:55.709 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:55.709 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.709 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:55.709 18:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:57.081 18:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:57.081 18:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:57.081 18:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:57.081 18:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.081 18:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:57.081 18:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:57.081 18:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:57.081 18:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.081 18:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:57.081 18:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:58.012 18:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:58.013 18:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:58.013 18:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:58.013 18:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.013 18:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:58.013 18:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:58.013 18:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:34:58.013 18:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.013 18:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:58.013 18:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:58.944 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:58.944 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:58.944 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.944 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:58.944 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:58.944 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:58.944 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:58.944 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.944 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:58.944 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:59.876 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:59.876 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:59.876 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:59.876 18:41:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.876 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:59.876 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.876 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:59.876 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.876 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:59.876 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:01.248 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:01.248 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:01.248 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:01.248 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.248 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:01.248 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:01.248 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:01.248 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.248 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:01.248 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:35:01.248 [2024-11-18 18:41:59.286846] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:01.248 [2024-11-18 18:41:59.286967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:01.248 [2024-11-18 18:41:59.287004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.248 [2024-11-18 18:41:59.287039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:01.248 [2024-11-18 18:41:59.287064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.248 [2024-11-18 18:41:59.287088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:01.248 [2024-11-18 18:41:59.287113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.248 [2024-11-18 18:41:59.287137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:01.248 [2024-11-18 18:41:59.287160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.248 [2024-11-18 18:41:59.287185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:01.248 [2024-11-18 18:41:59.287209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.248 [2024-11-18 18:41:59.287232] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:35:01.248 [2024-11-18 18:41:59.296857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:35:01.248 [2024-11-18 18:41:59.306913] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:01.248 [2024-11-18 18:41:59.306965] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:01.248 [2024-11-18 18:41:59.306987] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:01.248 [2024-11-18 18:41:59.307015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:01.248 [2024-11-18 18:41:59.307095] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:35:02.182 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:02.182 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:02.182 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.182 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:02.182 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.182 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:02.182 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:02.182 [2024-11-18 18:42:00.319684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:02.182 [2024-11-18 18:42:00.319773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:02.182 [2024-11-18 18:42:00.319811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:35:02.182 [2024-11-18 18:42:00.319878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:35:02.182 [2024-11-18 18:42:00.320722] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:35:02.182 [2024-11-18 18:42:00.320796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:02.182 [2024-11-18 18:42:00.320829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:02.182 [2024-11-18 18:42:00.320856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:02.182 [2024-11-18 18:42:00.320878] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:02.182 [2024-11-18 18:42:00.320915] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:02.182 [2024-11-18 18:42:00.320939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:02.182 [2024-11-18 18:42:00.320982] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:02.182 [2024-11-18 18:42:00.321010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:02.182 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.182 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:02.182 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:03.115 [2024-11-18 18:42:01.323543] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:03.115 [2024-11-18 18:42:01.323619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:35:03.115 [2024-11-18 18:42:01.323672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:03.115 [2024-11-18 18:42:01.323693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:03.115 [2024-11-18 18:42:01.323714] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:35:03.115 [2024-11-18 18:42:01.323734] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:03.115 [2024-11-18 18:42:01.323761] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:03.115 [2024-11-18 18:42:01.323775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:03.115 [2024-11-18 18:42:01.323852] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:03.115 [2024-11-18 18:42:01.323941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.115 [2024-11-18 18:42:01.323992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.115 [2024-11-18 18:42:01.324025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.115 [2024-11-18 18:42:01.324049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.115 [2024-11-18 18:42:01.324072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:35:03.115 [2024-11-18 18:42:01.324094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.115 [2024-11-18 18:42:01.324118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.115 [2024-11-18 18:42:01.324141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.115 [2024-11-18 18:42:01.324164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.115 [2024-11-18 18:42:01.324187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.115 [2024-11-18 18:42:01.324208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:35:03.115 [2024-11-18 18:42:01.324299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:35:03.115 [2024-11-18 18:42:01.325278] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:03.115 [2024-11-18 18:42:01.325314] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:35:03.115 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:03.115 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:03.115 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:03.115 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:03.115 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:03.115 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:03.115 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:03.115 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.115 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:03.115 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:03.115 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:03.115 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:03.115 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:03.115 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:03.115 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:03.115 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.115 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:03.115 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:03.115 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:03.115 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:35:03.374 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:03.374 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:04.306 18:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:04.306 18:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:04.306 18:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.306 18:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:04.306 18:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:04.306 18:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:04.306 18:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:04.306 18:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.306 18:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:04.306 18:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:05.239 [2024-11-18 18:42:03.339404] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:05.239 [2024-11-18 18:42:03.339446] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:05.239 [2024-11-18 18:42:03.339491] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:05.239 [2024-11-18 18:42:03.427862] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:05.239 18:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:05.239 18:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:05.239 18:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:05.239 18:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.239 18:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:05.239 18:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:05.239 18:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:05.239 18:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.239 18:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:05.239 18:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:05.497 [2024-11-18 18:42:03.649605] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:35:05.497 [2024-11-18 18:42:03.651334] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x6150001f3900:1 started. 
00:35:05.497 [2024-11-18 18:42:03.653516] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:05.497 [2024-11-18 18:42:03.653586] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:05.497 [2024-11-18 18:42:03.653678] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:05.497 [2024-11-18 18:42:03.653714] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:05.497 [2024-11-18 18:42:03.653740] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:05.497 [2024-11-18 18:42:03.658883] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x6150001f3900 was disconnected and freed. delete nvme_qpair. 00:35:06.430 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:06.430 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:06.430 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:06.430 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.430 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:06.430 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:06.430 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:06.430 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.430 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:06.430 18:42:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:06.430 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3109647 00:35:06.430 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3109647 ']' 00:35:06.430 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3109647 00:35:06.430 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:06.430 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:06.430 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3109647 00:35:06.430 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:06.430 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:06.430 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3109647' 00:35:06.430 killing process with pid 3109647 00:35:06.430 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3109647 00:35:06.430 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3109647 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:07.364 
18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:07.364 rmmod nvme_tcp 00:35:07.364 rmmod nvme_fabrics 00:35:07.364 rmmod nvme_keyring 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3109492 ']' 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3109492 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3109492 ']' 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3109492 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3109492 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3109492' 00:35:07.364 
killing process with pid 3109492 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3109492 00:35:07.364 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3109492 00:35:08.792 18:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:08.792 18:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:08.792 18:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:08.792 18:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:35:08.792 18:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:35:08.792 18:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:08.792 18:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:35:08.792 18:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:08.792 18:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:08.792 18:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:08.792 18:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:08.792 18:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:10.692 18:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:10.692 00:35:10.692 real 0m20.983s 00:35:10.692 user 0m31.127s 00:35:10.692 sys 0m3.147s 00:35:10.692 18:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:35:10.692 18:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:10.692 ************************************ 00:35:10.692 END TEST nvmf_discovery_remove_ifc 00:35:10.692 ************************************ 00:35:10.692 18:42:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:10.692 18:42:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:10.692 18:42:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:10.692 18:42:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.692 ************************************ 00:35:10.692 START TEST nvmf_identify_kernel_target 00:35:10.692 ************************************ 00:35:10.692 18:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:10.692 * Looking for test storage... 
00:35:10.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:10.692 18:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:10.692 18:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:35:10.692 18:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:10.692 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:10.692 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:10.692 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:10.692 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:10.692 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:10.692 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:10.692 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:10.692 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:10.692 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:10.692 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:10.692 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:10.692 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:10.692 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:10.692 18:42:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:10.692 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:10.693 18:42:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:10.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.693 --rc genhtml_branch_coverage=1 00:35:10.693 --rc genhtml_function_coverage=1 00:35:10.693 --rc genhtml_legend=1 00:35:10.693 --rc geninfo_all_blocks=1 00:35:10.693 --rc geninfo_unexecuted_blocks=1 00:35:10.693 00:35:10.693 ' 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:10.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.693 --rc genhtml_branch_coverage=1 00:35:10.693 --rc genhtml_function_coverage=1 00:35:10.693 --rc genhtml_legend=1 00:35:10.693 --rc geninfo_all_blocks=1 00:35:10.693 --rc geninfo_unexecuted_blocks=1 00:35:10.693 00:35:10.693 ' 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:10.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.693 --rc genhtml_branch_coverage=1 00:35:10.693 --rc genhtml_function_coverage=1 00:35:10.693 --rc genhtml_legend=1 00:35:10.693 --rc geninfo_all_blocks=1 00:35:10.693 --rc geninfo_unexecuted_blocks=1 00:35:10.693 00:35:10.693 ' 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:10.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.693 --rc genhtml_branch_coverage=1 00:35:10.693 --rc genhtml_function_coverage=1 00:35:10.693 --rc genhtml_legend=1 00:35:10.693 --rc geninfo_all_blocks=1 00:35:10.693 --rc geninfo_unexecuted_blocks=1 00:35:10.693 00:35:10.693 ' 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:10.693 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:10.951 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:10.951 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:10.951 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:10.951 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:10.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:10.951 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:10.951 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:10.951 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:10.951 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:35:10.951 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:10.951 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:10.951 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:10.951 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:10.952 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:10.952 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:10.952 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:10.952 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:10.952 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:10.952 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:10.952 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:10.952 18:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:12.853 18:42:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:12.853 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:12.853 18:42:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:12.853 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:12.853 18:42:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:12.853 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:12.853 Found net devices under 0000:0a:00.1: cvl_0_1 
00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:12.853 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:12.854 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:12.854 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:12.854 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:12.854 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:12.854 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:12.854 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:12.854 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:13.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:13.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:35:13.112 00:35:13.112 --- 10.0.0.2 ping statistics --- 00:35:13.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.112 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:13.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:13.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:35:13.112 00:35:13.112 --- 10.0.0.1 ping statistics --- 00:35:13.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.112 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:13.112 
18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:13.112 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:13.113 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:13.113 18:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:14.047 Waiting for block devices as requested 00:35:14.305 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:14.305 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:14.305 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:14.564 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:14.564 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:14.564 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:14.822 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:14.822 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:14.822 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:14.822 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:14.822 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:15.080 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:15.080 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:15.080 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:15.080 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:35:15.338 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:15.338 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:15.338 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:15.338 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:15.338 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:15.338 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:15.338 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:15.338 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:15.597 No valid GPT data, bailing 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:15.597 00:35:15.597 Discovery Log Number of Records 2, Generation counter 2 00:35:15.597 =====Discovery Log Entry 0====== 00:35:15.597 trtype: tcp 00:35:15.597 adrfam: ipv4 00:35:15.597 subtype: current discovery subsystem 
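The mkdir/echo/ln sequence above is configure_kernel_target building a kernel NVMe-oF target through configfs. The log truncates the redirection targets of the `echo` commands, so the attribute file names below are inferred from the standard kernel nvmet configfs layout rather than read from the trace; treat this as a hedged sketch (commands are only printed, as they need root and the nvmet modules).

```shell
#!/usr/bin/env sh
# Sketch of the configfs-based kernel target setup traced above.
# Attribute file names (attr_model, device_path, addr_*) follow the
# kernel nvmet configfs convention; the log itself elides them.
run() { echo "+ $*"; }

NQN=nqn.2016-06.io.spdk:testnqn
SUB=/sys/kernel/config/nvmet/subsystems/$NQN
PORT=/sys/kernel/config/nvmet/ports/1

run mkdir "$SUB"                   # create the subsystem
run mkdir "$SUB/namespaces/1"      # one namespace in it
run mkdir "$PORT"                  # one listening port
run "echo SPDK-$NQN        > $SUB/attr_model"
run "echo 1                > $SUB/attr_allow_any_host"
run "echo /dev/nvme0n1     > $SUB/namespaces/1/device_path"
run "echo 1                > $SUB/namespaces/1/enable"
run "echo 10.0.0.1         > $PORT/addr_traddr"
run "echo tcp              > $PORT/addr_trtype"
run "echo 4420             > $PORT/addr_trsvcid"
run "echo ipv4             > $PORT/addr_adrfam"
# Exposing the subsystem on the port is what makes it discoverable:
run "ln -s $SUB $PORT/subsystems/"
```

Once the symlink exists, `nvme discover -t tcp -a 10.0.0.1 -s 4420` returns the two discovery-log records shown in the trace (the discovery subsystem and the test NQN).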
00:35:15.597 treq: not specified, sq flow control disable supported 00:35:15.597 portid: 1 00:35:15.597 trsvcid: 4420 00:35:15.597 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:15.597 traddr: 10.0.0.1 00:35:15.597 eflags: none 00:35:15.597 sectype: none 00:35:15.597 =====Discovery Log Entry 1====== 00:35:15.597 trtype: tcp 00:35:15.597 adrfam: ipv4 00:35:15.597 subtype: nvme subsystem 00:35:15.597 treq: not specified, sq flow control disable supported 00:35:15.597 portid: 1 00:35:15.597 trsvcid: 4420 00:35:15.597 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:15.597 traddr: 10.0.0.1 00:35:15.597 eflags: none 00:35:15.597 sectype: none 00:35:15.597 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:15.597 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:15.857 ===================================================== 00:35:15.857 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:15.857 ===================================================== 00:35:15.857 Controller Capabilities/Features 00:35:15.857 ================================ 00:35:15.857 Vendor ID: 0000 00:35:15.857 Subsystem Vendor ID: 0000 00:35:15.857 Serial Number: 74938fe80e94e2c023ec 00:35:15.857 Model Number: Linux 00:35:15.857 Firmware Version: 6.8.9-20 00:35:15.857 Recommended Arb Burst: 0 00:35:15.857 IEEE OUI Identifier: 00 00 00 00:35:15.857 Multi-path I/O 00:35:15.857 May have multiple subsystem ports: No 00:35:15.857 May have multiple controllers: No 00:35:15.857 Associated with SR-IOV VF: No 00:35:15.857 Max Data Transfer Size: Unlimited 00:35:15.857 Max Number of Namespaces: 0 00:35:15.857 Max Number of I/O Queues: 1024 00:35:15.857 NVMe Specification Version (VS): 1.3 00:35:15.857 NVMe Specification Version (Identify): 1.3 00:35:15.857 Maximum Queue Entries: 1024 
00:35:15.857 Contiguous Queues Required: No 00:35:15.857 Arbitration Mechanisms Supported 00:35:15.857 Weighted Round Robin: Not Supported 00:35:15.857 Vendor Specific: Not Supported 00:35:15.857 Reset Timeout: 7500 ms 00:35:15.857 Doorbell Stride: 4 bytes 00:35:15.857 NVM Subsystem Reset: Not Supported 00:35:15.857 Command Sets Supported 00:35:15.857 NVM Command Set: Supported 00:35:15.857 Boot Partition: Not Supported 00:35:15.857 Memory Page Size Minimum: 4096 bytes 00:35:15.857 Memory Page Size Maximum: 4096 bytes 00:35:15.857 Persistent Memory Region: Not Supported 00:35:15.857 Optional Asynchronous Events Supported 00:35:15.857 Namespace Attribute Notices: Not Supported 00:35:15.857 Firmware Activation Notices: Not Supported 00:35:15.857 ANA Change Notices: Not Supported 00:35:15.857 PLE Aggregate Log Change Notices: Not Supported 00:35:15.857 LBA Status Info Alert Notices: Not Supported 00:35:15.857 EGE Aggregate Log Change Notices: Not Supported 00:35:15.857 Normal NVM Subsystem Shutdown event: Not Supported 00:35:15.857 Zone Descriptor Change Notices: Not Supported 00:35:15.857 Discovery Log Change Notices: Supported 00:35:15.857 Controller Attributes 00:35:15.857 128-bit Host Identifier: Not Supported 00:35:15.857 Non-Operational Permissive Mode: Not Supported 00:35:15.857 NVM Sets: Not Supported 00:35:15.857 Read Recovery Levels: Not Supported 00:35:15.857 Endurance Groups: Not Supported 00:35:15.857 Predictable Latency Mode: Not Supported 00:35:15.857 Traffic Based Keep ALive: Not Supported 00:35:15.857 Namespace Granularity: Not Supported 00:35:15.857 SQ Associations: Not Supported 00:35:15.857 UUID List: Not Supported 00:35:15.857 Multi-Domain Subsystem: Not Supported 00:35:15.857 Fixed Capacity Management: Not Supported 00:35:15.857 Variable Capacity Management: Not Supported 00:35:15.857 Delete Endurance Group: Not Supported 00:35:15.857 Delete NVM Set: Not Supported 00:35:15.857 Extended LBA Formats Supported: Not Supported 00:35:15.857 Flexible 
Data Placement Supported: Not Supported 00:35:15.857 00:35:15.857 Controller Memory Buffer Support 00:35:15.857 ================================ 00:35:15.857 Supported: No 00:35:15.857 00:35:15.857 Persistent Memory Region Support 00:35:15.857 ================================ 00:35:15.857 Supported: No 00:35:15.857 00:35:15.857 Admin Command Set Attributes 00:35:15.857 ============================ 00:35:15.857 Security Send/Receive: Not Supported 00:35:15.857 Format NVM: Not Supported 00:35:15.857 Firmware Activate/Download: Not Supported 00:35:15.857 Namespace Management: Not Supported 00:35:15.857 Device Self-Test: Not Supported 00:35:15.857 Directives: Not Supported 00:35:15.857 NVMe-MI: Not Supported 00:35:15.857 Virtualization Management: Not Supported 00:35:15.857 Doorbell Buffer Config: Not Supported 00:35:15.857 Get LBA Status Capability: Not Supported 00:35:15.857 Command & Feature Lockdown Capability: Not Supported 00:35:15.857 Abort Command Limit: 1 00:35:15.857 Async Event Request Limit: 1 00:35:15.857 Number of Firmware Slots: N/A 00:35:15.857 Firmware Slot 1 Read-Only: N/A 00:35:15.857 Firmware Activation Without Reset: N/A 00:35:15.857 Multiple Update Detection Support: N/A 00:35:15.857 Firmware Update Granularity: No Information Provided 00:35:15.857 Per-Namespace SMART Log: No 00:35:15.857 Asymmetric Namespace Access Log Page: Not Supported 00:35:15.857 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:15.857 Command Effects Log Page: Not Supported 00:35:15.857 Get Log Page Extended Data: Supported 00:35:15.857 Telemetry Log Pages: Not Supported 00:35:15.857 Persistent Event Log Pages: Not Supported 00:35:15.857 Supported Log Pages Log Page: May Support 00:35:15.857 Commands Supported & Effects Log Page: Not Supported 00:35:15.857 Feature Identifiers & Effects Log Page:May Support 00:35:15.857 NVMe-MI Commands & Effects Log Page: May Support 00:35:15.857 Data Area 4 for Telemetry Log: Not Supported 00:35:15.857 Error Log Page Entries 
Supported: 1 00:35:15.857 Keep Alive: Not Supported 00:35:15.857 00:35:15.857 NVM Command Set Attributes 00:35:15.857 ========================== 00:35:15.857 Submission Queue Entry Size 00:35:15.857 Max: 1 00:35:15.857 Min: 1 00:35:15.857 Completion Queue Entry Size 00:35:15.857 Max: 1 00:35:15.857 Min: 1 00:35:15.857 Number of Namespaces: 0 00:35:15.857 Compare Command: Not Supported 00:35:15.857 Write Uncorrectable Command: Not Supported 00:35:15.857 Dataset Management Command: Not Supported 00:35:15.857 Write Zeroes Command: Not Supported 00:35:15.857 Set Features Save Field: Not Supported 00:35:15.857 Reservations: Not Supported 00:35:15.857 Timestamp: Not Supported 00:35:15.857 Copy: Not Supported 00:35:15.857 Volatile Write Cache: Not Present 00:35:15.857 Atomic Write Unit (Normal): 1 00:35:15.857 Atomic Write Unit (PFail): 1 00:35:15.857 Atomic Compare & Write Unit: 1 00:35:15.857 Fused Compare & Write: Not Supported 00:35:15.857 Scatter-Gather List 00:35:15.857 SGL Command Set: Supported 00:35:15.857 SGL Keyed: Not Supported 00:35:15.857 SGL Bit Bucket Descriptor: Not Supported 00:35:15.857 SGL Metadata Pointer: Not Supported 00:35:15.857 Oversized SGL: Not Supported 00:35:15.857 SGL Metadata Address: Not Supported 00:35:15.857 SGL Offset: Supported 00:35:15.857 Transport SGL Data Block: Not Supported 00:35:15.857 Replay Protected Memory Block: Not Supported 00:35:15.857 00:35:15.857 Firmware Slot Information 00:35:15.857 ========================= 00:35:15.857 Active slot: 0 00:35:15.857 00:35:15.857 00:35:15.857 Error Log 00:35:15.857 ========= 00:35:15.857 00:35:15.857 Active Namespaces 00:35:15.857 ================= 00:35:15.857 Discovery Log Page 00:35:15.857 ================== 00:35:15.857 Generation Counter: 2 00:35:15.857 Number of Records: 2 00:35:15.857 Record Format: 0 00:35:15.857 00:35:15.857 Discovery Log Entry 0 00:35:15.857 ---------------------- 00:35:15.857 Transport Type: 3 (TCP) 00:35:15.857 Address Family: 1 (IPv4) 00:35:15.857 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:35:15.857 Entry Flags: 00:35:15.857 Duplicate Returned Information: 0 00:35:15.857 Explicit Persistent Connection Support for Discovery: 0 00:35:15.857 Transport Requirements: 00:35:15.857 Secure Channel: Not Specified 00:35:15.857 Port ID: 1 (0x0001) 00:35:15.857 Controller ID: 65535 (0xffff) 00:35:15.857 Admin Max SQ Size: 32 00:35:15.857 Transport Service Identifier: 4420 00:35:15.857 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:15.857 Transport Address: 10.0.0.1 00:35:15.857 Discovery Log Entry 1 00:35:15.857 ---------------------- 00:35:15.857 Transport Type: 3 (TCP) 00:35:15.857 Address Family: 1 (IPv4) 00:35:15.857 Subsystem Type: 2 (NVM Subsystem) 00:35:15.857 Entry Flags: 00:35:15.857 Duplicate Returned Information: 0 00:35:15.857 Explicit Persistent Connection Support for Discovery: 0 00:35:15.857 Transport Requirements: 00:35:15.857 Secure Channel: Not Specified 00:35:15.857 Port ID: 1 (0x0001) 00:35:15.857 Controller ID: 65535 (0xffff) 00:35:15.857 Admin Max SQ Size: 32 00:35:15.857 Transport Service Identifier: 4420 00:35:15.857 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:15.857 Transport Address: 10.0.0.1 00:35:15.858 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:16.117 get_feature(0x01) failed 00:35:16.117 get_feature(0x02) failed 00:35:16.117 get_feature(0x04) failed 00:35:16.117 ===================================================== 00:35:16.117 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:16.117 ===================================================== 00:35:16.117 Controller Capabilities/Features 00:35:16.117 ================================ 00:35:16.117 Vendor ID: 0000 00:35:16.117 Subsystem Vendor ID: 
0000 00:35:16.117 Serial Number: c0316d3fd1a402eafde1 00:35:16.117 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:16.117 Firmware Version: 6.8.9-20 00:35:16.117 Recommended Arb Burst: 6 00:35:16.117 IEEE OUI Identifier: 00 00 00 00:35:16.117 Multi-path I/O 00:35:16.117 May have multiple subsystem ports: Yes 00:35:16.117 May have multiple controllers: Yes 00:35:16.117 Associated with SR-IOV VF: No 00:35:16.117 Max Data Transfer Size: Unlimited 00:35:16.117 Max Number of Namespaces: 1024 00:35:16.117 Max Number of I/O Queues: 128 00:35:16.117 NVMe Specification Version (VS): 1.3 00:35:16.117 NVMe Specification Version (Identify): 1.3 00:35:16.117 Maximum Queue Entries: 1024 00:35:16.117 Contiguous Queues Required: No 00:35:16.117 Arbitration Mechanisms Supported 00:35:16.117 Weighted Round Robin: Not Supported 00:35:16.117 Vendor Specific: Not Supported 00:35:16.117 Reset Timeout: 7500 ms 00:35:16.117 Doorbell Stride: 4 bytes 00:35:16.117 NVM Subsystem Reset: Not Supported 00:35:16.117 Command Sets Supported 00:35:16.117 NVM Command Set: Supported 00:35:16.117 Boot Partition: Not Supported 00:35:16.117 Memory Page Size Minimum: 4096 bytes 00:35:16.117 Memory Page Size Maximum: 4096 bytes 00:35:16.117 Persistent Memory Region: Not Supported 00:35:16.117 Optional Asynchronous Events Supported 00:35:16.117 Namespace Attribute Notices: Supported 00:35:16.117 Firmware Activation Notices: Not Supported 00:35:16.117 ANA Change Notices: Supported 00:35:16.117 PLE Aggregate Log Change Notices: Not Supported 00:35:16.117 LBA Status Info Alert Notices: Not Supported 00:35:16.117 EGE Aggregate Log Change Notices: Not Supported 00:35:16.117 Normal NVM Subsystem Shutdown event: Not Supported 00:35:16.117 Zone Descriptor Change Notices: Not Supported 00:35:16.117 Discovery Log Change Notices: Not Supported 00:35:16.117 Controller Attributes 00:35:16.117 128-bit Host Identifier: Supported 00:35:16.117 Non-Operational Permissive Mode: Not Supported 00:35:16.117 NVM Sets: Not 
Supported 00:35:16.117 Read Recovery Levels: Not Supported 00:35:16.117 Endurance Groups: Not Supported 00:35:16.117 Predictable Latency Mode: Not Supported 00:35:16.117 Traffic Based Keep ALive: Supported 00:35:16.117 Namespace Granularity: Not Supported 00:35:16.117 SQ Associations: Not Supported 00:35:16.117 UUID List: Not Supported 00:35:16.117 Multi-Domain Subsystem: Not Supported 00:35:16.117 Fixed Capacity Management: Not Supported 00:35:16.117 Variable Capacity Management: Not Supported 00:35:16.117 Delete Endurance Group: Not Supported 00:35:16.117 Delete NVM Set: Not Supported 00:35:16.117 Extended LBA Formats Supported: Not Supported 00:35:16.117 Flexible Data Placement Supported: Not Supported 00:35:16.117 00:35:16.117 Controller Memory Buffer Support 00:35:16.117 ================================ 00:35:16.117 Supported: No 00:35:16.117 00:35:16.117 Persistent Memory Region Support 00:35:16.117 ================================ 00:35:16.117 Supported: No 00:35:16.117 00:35:16.117 Admin Command Set Attributes 00:35:16.117 ============================ 00:35:16.117 Security Send/Receive: Not Supported 00:35:16.117 Format NVM: Not Supported 00:35:16.117 Firmware Activate/Download: Not Supported 00:35:16.117 Namespace Management: Not Supported 00:35:16.117 Device Self-Test: Not Supported 00:35:16.117 Directives: Not Supported 00:35:16.117 NVMe-MI: Not Supported 00:35:16.117 Virtualization Management: Not Supported 00:35:16.117 Doorbell Buffer Config: Not Supported 00:35:16.117 Get LBA Status Capability: Not Supported 00:35:16.117 Command & Feature Lockdown Capability: Not Supported 00:35:16.117 Abort Command Limit: 4 00:35:16.117 Async Event Request Limit: 4 00:35:16.117 Number of Firmware Slots: N/A 00:35:16.117 Firmware Slot 1 Read-Only: N/A 00:35:16.117 Firmware Activation Without Reset: N/A 00:35:16.117 Multiple Update Detection Support: N/A 00:35:16.117 Firmware Update Granularity: No Information Provided 00:35:16.117 Per-Namespace SMART Log: Yes 
00:35:16.117 Asymmetric Namespace Access Log Page: Supported 00:35:16.117 ANA Transition Time : 10 sec 00:35:16.117 00:35:16.117 Asymmetric Namespace Access Capabilities 00:35:16.117 ANA Optimized State : Supported 00:35:16.117 ANA Non-Optimized State : Supported 00:35:16.117 ANA Inaccessible State : Supported 00:35:16.117 ANA Persistent Loss State : Supported 00:35:16.117 ANA Change State : Supported 00:35:16.117 ANAGRPID is not changed : No 00:35:16.117 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:16.117 00:35:16.117 ANA Group Identifier Maximum : 128 00:35:16.117 Number of ANA Group Identifiers : 128 00:35:16.118 Max Number of Allowed Namespaces : 1024 00:35:16.118 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:16.118 Command Effects Log Page: Supported 00:35:16.118 Get Log Page Extended Data: Supported 00:35:16.118 Telemetry Log Pages: Not Supported 00:35:16.118 Persistent Event Log Pages: Not Supported 00:35:16.118 Supported Log Pages Log Page: May Support 00:35:16.118 Commands Supported & Effects Log Page: Not Supported 00:35:16.118 Feature Identifiers & Effects Log Page:May Support 00:35:16.118 NVMe-MI Commands & Effects Log Page: May Support 00:35:16.118 Data Area 4 for Telemetry Log: Not Supported 00:35:16.118 Error Log Page Entries Supported: 128 00:35:16.118 Keep Alive: Supported 00:35:16.118 Keep Alive Granularity: 1000 ms 00:35:16.118 00:35:16.118 NVM Command Set Attributes 00:35:16.118 ========================== 00:35:16.118 Submission Queue Entry Size 00:35:16.118 Max: 64 00:35:16.118 Min: 64 00:35:16.118 Completion Queue Entry Size 00:35:16.118 Max: 16 00:35:16.118 Min: 16 00:35:16.118 Number of Namespaces: 1024 00:35:16.118 Compare Command: Not Supported 00:35:16.118 Write Uncorrectable Command: Not Supported 00:35:16.118 Dataset Management Command: Supported 00:35:16.118 Write Zeroes Command: Supported 00:35:16.118 Set Features Save Field: Not Supported 00:35:16.118 Reservations: Not Supported 00:35:16.118 Timestamp: Not Supported 
00:35:16.118 Copy: Not Supported 00:35:16.118 Volatile Write Cache: Present 00:35:16.118 Atomic Write Unit (Normal): 1 00:35:16.118 Atomic Write Unit (PFail): 1 00:35:16.118 Atomic Compare & Write Unit: 1 00:35:16.118 Fused Compare & Write: Not Supported 00:35:16.118 Scatter-Gather List 00:35:16.118 SGL Command Set: Supported 00:35:16.118 SGL Keyed: Not Supported 00:35:16.118 SGL Bit Bucket Descriptor: Not Supported 00:35:16.118 SGL Metadata Pointer: Not Supported 00:35:16.118 Oversized SGL: Not Supported 00:35:16.118 SGL Metadata Address: Not Supported 00:35:16.118 SGL Offset: Supported 00:35:16.118 Transport SGL Data Block: Not Supported 00:35:16.118 Replay Protected Memory Block: Not Supported 00:35:16.118 00:35:16.118 Firmware Slot Information 00:35:16.118 ========================= 00:35:16.118 Active slot: 0 00:35:16.118 00:35:16.118 Asymmetric Namespace Access 00:35:16.118 =========================== 00:35:16.118 Change Count : 0 00:35:16.118 Number of ANA Group Descriptors : 1 00:35:16.118 ANA Group Descriptor : 0 00:35:16.118 ANA Group ID : 1 00:35:16.118 Number of NSID Values : 1 00:35:16.118 Change Count : 0 00:35:16.118 ANA State : 1 00:35:16.118 Namespace Identifier : 1 00:35:16.118 00:35:16.118 Commands Supported and Effects 00:35:16.118 ============================== 00:35:16.118 Admin Commands 00:35:16.118 -------------- 00:35:16.118 Get Log Page (02h): Supported 00:35:16.118 Identify (06h): Supported 00:35:16.118 Abort (08h): Supported 00:35:16.118 Set Features (09h): Supported 00:35:16.118 Get Features (0Ah): Supported 00:35:16.118 Asynchronous Event Request (0Ch): Supported 00:35:16.118 Keep Alive (18h): Supported 00:35:16.118 I/O Commands 00:35:16.118 ------------ 00:35:16.118 Flush (00h): Supported 00:35:16.118 Write (01h): Supported LBA-Change 00:35:16.118 Read (02h): Supported 00:35:16.118 Write Zeroes (08h): Supported LBA-Change 00:35:16.118 Dataset Management (09h): Supported 00:35:16.118 00:35:16.118 Error Log 00:35:16.118 ========= 
00:35:16.118 Entry: 0 00:35:16.118 Error Count: 0x3 00:35:16.118 Submission Queue Id: 0x0 00:35:16.118 Command Id: 0x5 00:35:16.118 Phase Bit: 0 00:35:16.118 Status Code: 0x2 00:35:16.118 Status Code Type: 0x0 00:35:16.118 Do Not Retry: 1 00:35:16.118 Error Location: 0x28 00:35:16.118 LBA: 0x0 00:35:16.118 Namespace: 0x0 00:35:16.118 Vendor Log Page: 0x0 00:35:16.118 ----------- 00:35:16.118 Entry: 1 00:35:16.118 Error Count: 0x2 00:35:16.118 Submission Queue Id: 0x0 00:35:16.118 Command Id: 0x5 00:35:16.118 Phase Bit: 0 00:35:16.118 Status Code: 0x2 00:35:16.118 Status Code Type: 0x0 00:35:16.118 Do Not Retry: 1 00:35:16.118 Error Location: 0x28 00:35:16.118 LBA: 0x0 00:35:16.118 Namespace: 0x0 00:35:16.118 Vendor Log Page: 0x0 00:35:16.118 ----------- 00:35:16.118 Entry: 2 00:35:16.118 Error Count: 0x1 00:35:16.118 Submission Queue Id: 0x0 00:35:16.118 Command Id: 0x4 00:35:16.118 Phase Bit: 0 00:35:16.118 Status Code: 0x2 00:35:16.118 Status Code Type: 0x0 00:35:16.118 Do Not Retry: 1 00:35:16.118 Error Location: 0x28 00:35:16.118 LBA: 0x0 00:35:16.118 Namespace: 0x0 00:35:16.118 Vendor Log Page: 0x0 00:35:16.118 00:35:16.118 Number of Queues 00:35:16.118 ================ 00:35:16.118 Number of I/O Submission Queues: 128 00:35:16.118 Number of I/O Completion Queues: 128 00:35:16.118 00:35:16.118 ZNS Specific Controller Data 00:35:16.118 ============================ 00:35:16.118 Zone Append Size Limit: 0 00:35:16.118 00:35:16.118 00:35:16.118 Active Namespaces 00:35:16.118 ================= 00:35:16.119 get_feature(0x05) failed 00:35:16.119 Namespace ID:1 00:35:16.119 Command Set Identifier: NVM (00h) 00:35:16.119 Deallocate: Supported 00:35:16.119 Deallocated/Unwritten Error: Not Supported 00:35:16.119 Deallocated Read Value: Unknown 00:35:16.119 Deallocate in Write Zeroes: Not Supported 00:35:16.119 Deallocated Guard Field: 0xFFFF 00:35:16.119 Flush: Supported 00:35:16.119 Reservation: Not Supported 00:35:16.119 Namespace Sharing Capabilities: Multiple 
Controllers 00:35:16.119 Size (in LBAs): 1953525168 (931GiB) 00:35:16.119 Capacity (in LBAs): 1953525168 (931GiB) 00:35:16.119 Utilization (in LBAs): 1953525168 (931GiB) 00:35:16.119 UUID: 5a5ca335-d17b-4703-ab60-c67c049a2cf9 00:35:16.119 Thin Provisioning: Not Supported 00:35:16.119 Per-NS Atomic Units: Yes 00:35:16.119 Atomic Boundary Size (Normal): 0 00:35:16.119 Atomic Boundary Size (PFail): 0 00:35:16.119 Atomic Boundary Offset: 0 00:35:16.119 NGUID/EUI64 Never Reused: No 00:35:16.119 ANA group ID: 1 00:35:16.119 Namespace Write Protected: No 00:35:16.119 Number of LBA Formats: 1 00:35:16.119 Current LBA Format: LBA Format #00 00:35:16.119 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:16.119 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:16.119 rmmod nvme_tcp 00:35:16.119 rmmod nvme_fabrics 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:16.119 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:18.019 18:42:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:18.019 18:42:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:18.019 18:42:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:18.019 18:42:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:35:18.019 18:42:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:18.019 18:42:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:18.019 18:42:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:18.019 18:42:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:18.019 18:42:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:18.019 18:42:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:18.277 18:42:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:19.211 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:19.211 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:19.211 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:19.211 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:19.211 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:19.211 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:19.469 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:19.469 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:19.469 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:19.469 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:19.469 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:19.469 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:19.469 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:19.469 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:19.469 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:19.469 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
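The rm/rmdir sequence above is clean_kernel_target undoing the configfs tree in reverse creation order, which matters because configfs refuses to remove a directory while it still has children or incoming links. A hedged, print-only sketch of that teardown, with paths taken from the trace:

```shell
#!/usr/bin/env sh
# Sketch of the kernel-target teardown traced above; commands are
# printed only, since the real ones need root and an existing target.
run() { echo "+ $*"; }

NQN=nqn.2016-06.io.spdk:testnqn
SUB=/sys/kernel/config/nvmet/subsystems/$NQN

run "echo 0 > $SUB/namespaces/1/enable"            # disable the namespace first
run rm -f "/sys/kernel/config/nvmet/ports/1/subsystems/$NQN"  # unexpose from the port
run rmdir "$SUB/namespaces/1"
run rmdir /sys/kernel/config/nvmet/ports/1
run rmdir "$SUB"
run modprobe -r nvmet_tcp nvmet                     # unload once the tree is empty
```

Reversing any two steps (e.g. rmdir before removing the port symlink) would fail with EBUSY, which is why the script is strictly ordered.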
00:35:20.404 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:20.404 00:35:20.404 real 0m9.790s 00:35:20.404 user 0m2.220s 00:35:20.404 sys 0m3.588s 00:35:20.404 18:42:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:20.404 18:42:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:20.404 ************************************ 00:35:20.404 END TEST nvmf_identify_kernel_target 00:35:20.404 ************************************ 00:35:20.404 18:42:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:20.404 18:42:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:20.404 18:42:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:20.404 18:42:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.404 ************************************ 00:35:20.404 START TEST nvmf_auth_host 00:35:20.404 ************************************ 00:35:20.404 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:20.663 * Looking for test storage... 
00:35:20.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:20.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.663 --rc genhtml_branch_coverage=1 00:35:20.663 --rc genhtml_function_coverage=1 00:35:20.663 --rc genhtml_legend=1 00:35:20.663 --rc geninfo_all_blocks=1 00:35:20.663 --rc geninfo_unexecuted_blocks=1 00:35:20.663 00:35:20.663 ' 00:35:20.663 18:42:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:20.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.663 --rc genhtml_branch_coverage=1 00:35:20.663 --rc genhtml_function_coverage=1 00:35:20.663 --rc genhtml_legend=1 00:35:20.663 --rc geninfo_all_blocks=1 00:35:20.663 --rc geninfo_unexecuted_blocks=1 00:35:20.663 00:35:20.663 ' 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:20.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.663 --rc genhtml_branch_coverage=1 00:35:20.663 --rc genhtml_function_coverage=1 00:35:20.663 --rc genhtml_legend=1 00:35:20.663 --rc geninfo_all_blocks=1 00:35:20.663 --rc geninfo_unexecuted_blocks=1 00:35:20.663 00:35:20.663 ' 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:20.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.663 --rc genhtml_branch_coverage=1 00:35:20.663 --rc genhtml_function_coverage=1 00:35:20.663 --rc genhtml_legend=1 00:35:20.663 --rc geninfo_all_blocks=1 00:35:20.663 --rc geninfo_unexecuted_blocks=1 00:35:20.663 00:35:20.663 ' 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
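The scripts/common.sh trace above is the dotted-version comparison that decides whether the installed lcov is older than 2 ("lt 1.15 2"). A minimal sketch of the same idea: split both versions on ".-:" and compare numerically field by field (non-numeric fields, which the real script also validates, are out of scope here):

```shell
# Returns 0 (true) when $1 < $2 as dotted version strings, mirroring
# the cmp_versions logic in the trace (e.g. lcov "1.15" vs "2").
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v a b
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
        ((a > b)) && return 1
        ((a < b)) && return 0
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Here "1.15" splits to (1 15) and "2" to (2); the first field already decides the comparison, which is why the log takes the `return 0` path at scripts/common.sh@368 and enables the branch-coverage lcov options.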
00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.663 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.663 18:42:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:20.664 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:20.664 18:42:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:35:20.664 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:22.565 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:22.565 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:22.565 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:22.565 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:22.566 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:22.566 18:42:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:22.566 18:42:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:22.566 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:22.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:22.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:35:22.825 00:35:22.825 --- 10.0.0.2 ping statistics --- 00:35:22.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:22.825 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:22.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:22.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:35:22.825 00:35:22.825 --- 10.0.0.1 ping statistics --- 00:35:22.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:22.825 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3117754 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:22.825 18:42:20 
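The nvmf_tcp_init portion of the trace (nvmf/common.sh@250-291) builds the test topology: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP 4420 is opened, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. A condensed plan of those commands, printed rather than executed (they need root and the physical NICs named in the log):

```shell
# Emit the namespace-plumbing command sequence from the trace.
# Interface and namespace names are taken from the log, not invented.
netns_plan() {
    local ns=$1 tgt_if=$2 ini_if=$3
    cat <<EOF
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $ns ping -c 1 10.0.0.1
EOF
}
netns_plan cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

Isolating only the target port in a namespace lets a single machine exercise real NIC-to-NIC TCP traffic: the kernel cannot short-circuit the connection through loopback, so the 4420 listener is reached over the physical link, which is why the trace then prefixes NVMF_APP with `ip netns exec cvl_0_0_ns_spdk`.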
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3117754 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3117754 ']' 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:22.825 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=086f6d6390d6360874f36ed24e54a407 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.lDi 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 086f6d6390d6360874f36ed24e54a407 0 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 086f6d6390d6360874f36ed24e54a407 0 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=086f6d6390d6360874f36ed24e54a407 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.lDi 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.lDi 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.lDi 00:35:23.761 18:42:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3a9924b135417bbe59979fba1cbfce37772cedee256d8f48e75d8830fbf4261e 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.0yC 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3a9924b135417bbe59979fba1cbfce37772cedee256d8f48e75d8830fbf4261e 3 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3a9924b135417bbe59979fba1cbfce37772cedee256d8f48e75d8830fbf4261e 3 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3a9924b135417bbe59979fba1cbfce37772cedee256d8f48e75d8830fbf4261e 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:23.761 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.0yC 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.0yC 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.0yC 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6d7bb92a2013d3910b3f1994a56babc2bb65623912d5a33d 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Esb 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6d7bb92a2013d3910b3f1994a56babc2bb65623912d5a33d 0 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6d7bb92a2013d3910b3f1994a56babc2bb65623912d5a33d 0 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:24.020 18:42:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6d7bb92a2013d3910b3f1994a56babc2bb65623912d5a33d 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Esb 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Esb 00:35:24.020 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Esb 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3ceebeec325523f8bf746ac29c298719d5d269ff19c50a81 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.9g3 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3ceebeec325523f8bf746ac29c298719d5d269ff19c50a81 2 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 3ceebeec325523f8bf746ac29c298719d5d269ff19c50a81 2 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3ceebeec325523f8bf746ac29c298719d5d269ff19c50a81 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.9g3 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.9g3 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.9g3 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2a6ee107064cd3df502ad1feccb4af71 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.81l 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2a6ee107064cd3df502ad1feccb4af71 1 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2a6ee107064cd3df502ad1feccb4af71 1 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2a6ee107064cd3df502ad1feccb4af71 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.81l 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.81l 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.81l 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=13bcd1e52a59894c2253485144ce0728 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.85d 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 13bcd1e52a59894c2253485144ce0728 1 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 13bcd1e52a59894c2253485144ce0728 1 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=13bcd1e52a59894c2253485144ce0728 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.85d 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.85d 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.85d 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:24.021 18:42:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9e4d3e46e8ea0f811df71487b9c5a6e7e9cf31b871fef9bf 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Bnv 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9e4d3e46e8ea0f811df71487b9c5a6e7e9cf31b871fef9bf 2 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9e4d3e46e8ea0f811df71487b9c5a6e7e9cf31b871fef9bf 2 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9e4d3e46e8ea0f811df71487b9c5a6e7e9cf31b871fef9bf 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:24.021 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Bnv 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Bnv 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Bnv 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4d4364571662feda76a93d286724b451 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.oop 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4d4364571662feda76a93d286724b451 0 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4d4364571662feda76a93d286724b451 0 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4d4364571662feda76a93d286724b451 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.oop 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.oop 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.oop 00:35:24.280 18:42:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9ee1e8556b510d3befa94f173dbb43b848708144c556b67f85ad7352908b71af 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.JJK 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9ee1e8556b510d3befa94f173dbb43b848708144c556b67f85ad7352908b71af 3 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9ee1e8556b510d3befa94f173dbb43b848708144c556b67f85ad7352908b71af 3 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9ee1e8556b510d3befa94f173dbb43b848708144c556b67f85ad7352908b71af 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.JJK 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.JJK 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.JJK 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3117754 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3117754 ']' 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:24.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
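The trace above builds five DH-HMAC-CHAP key/ckey pairs: each `gen_dhchap_key <digest> <len>` call pulls `len/2` random bytes through `xxd -p`, then `format_dhchap_key` wraps the hex string in the `DHHC-1:<hash-id>:<base64>:` secret format via an inline `python -` snippet. A minimal standalone sketch of what those helpers in `nvmf/common.sh` appear to do (the CRC-32 suffix and its little-endian byte order are assumptions inferred from the output, not shown in the trace):

```shell
# Hedged sketch of gen_dhchap_key/format_dhchap_key from nvmf/common.sh.
# Names mirror the trace; the CRC-32 tail is an assumption.

digest_id() {
    # Map digest name -> DH-HMAC-CHAP hash indicator used in the key prefix.
    case $1 in null) echo 0 ;; sha256) echo 1 ;; sha384) echo 2 ;; sha512) echo 3 ;; esac
}

format_dhchap_key() {
    # $1 = hex secret (treated as an ASCII string), $2 = hash indicator (0-3)
    python3 - "$1" "$2" <<'PYEOF'
import base64, sys, zlib

secret = sys.argv[1].encode()                   # the ASCII hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")  # assumed: CRC-32 appended little-endian
print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(secret + crc).decode()}:")
PYEOF
}

gen_dhchap_key() {
    # $1 = digest name, $2 = secret length in hex characters (so len/2 random bytes)
    format_dhchap_key "$(xxd -p -c0 -l $(($2 / 2)) /dev/urandom)" "$(digest_id "$1")"
}

# Re-format the hex secret the trace generated for keys[1]; the result should
# carry the DHHC-1:00: prefix and the same base64 body seen later in the log.
format_dhchap_key 6d7bb92a2013d3910b3f1994a56babc2bb65623912d5a33d 0
```

Because 48 secret bytes align on base64's 3-byte groups, the first 64 characters of the payload encode the secret alone, and only the trailing 8 characters carry the checksum.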
00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:24.280 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.539 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:24.539 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:35:24.539 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:24.539 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.lDi 00:35:24.539 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.539 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.539 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.539 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.0yC ]] 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0yC 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Esb 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.9g3 ]] 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9g3 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.81l 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.85d ]] 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.85d 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.Bnv 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.oop ]] 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.oop 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.JJK 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:24.540 18:42:22 
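The loop above registers every generated key file with the SPDK target's keyring, naming host keys `key0`..`key4` and controller (bidirectional) keys `ckey0`..`ckey3`. An illustrative sketch of the equivalent `rpc.py` calls (not runnable as-is: it assumes the `keys`/`ckeys` arrays from this run and a target listening on `/var/tmp/spdk.sock`):

```shell
# Illustrative only: mirrors the keyring_file_add_key calls in the trace.
for i in "${!keys[@]}"; do
    scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
    # Controller keys are optional; ckeys[4] is empty here, so it is skipped.
    [[ -n ${ckeys[i]} ]] && scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
done
```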
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:24.540 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:25.473 Waiting for block devices as requested 00:35:25.732 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:25.732 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:25.991 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:25.991 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:25.991 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:25.991 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:26.249 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:26.249 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:26.249 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:26.249 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:26.507 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:26.507 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:26.507 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:26.507 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:26.766 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:26.766 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:26.766 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:27.333 No valid GPT data, bailing 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:27.333 00:35:27.333 Discovery Log Number of Records 2, Generation counter 2 00:35:27.333 =====Discovery Log Entry 0====== 00:35:27.333 trtype: tcp 00:35:27.333 adrfam: ipv4 00:35:27.333 subtype: current discovery subsystem 00:35:27.333 treq: not specified, sq flow control disable supported 00:35:27.333 portid: 1 00:35:27.333 trsvcid: 4420 00:35:27.333 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:27.333 traddr: 10.0.0.1 00:35:27.333 eflags: none 00:35:27.333 sectype: none 00:35:27.333 =====Discovery Log Entry 1====== 00:35:27.333 trtype: tcp 00:35:27.333 adrfam: ipv4 00:35:27.333 subtype: nvme subsystem 00:35:27.333 treq: not specified, sq flow control disable supported 00:35:27.333 portid: 1 00:35:27.333 trsvcid: 4420 00:35:27.333 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:27.333 traddr: 10.0.0.1 00:35:27.333 eflags: none 00:35:27.333 sectype: none 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:27.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: ]] 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.334 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.592 nvme0n1 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: ]] 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.592 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.851 nvme0n1 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.851 18:42:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: ]] 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.851 
18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.851 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.110 nvme0n1 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: ]] 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.110 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:35:28.369 nvme0n1 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: ]] 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.369 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:28.370 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:28.370 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:28.370 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:28.370 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.370 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.370 nvme0n1 00:35:28.370 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.370 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.370 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:35:28.370 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.370 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.370 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:28.629 18:42:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.629 nvme0n1 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.629 
18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:28.629 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: ]] 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:28.888 
18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.888 18:42:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.888 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.888 nvme0n1 00:35:28.888 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.888 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.888 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.888 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.888 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.888 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.888 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.888 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.888 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.888 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.147 18:42:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: ]] 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:29.147 18:42:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.147 nvme0n1 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.147 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.405 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.405 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.405 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.405 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.406 18:42:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: ]] 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.406 nvme0n1 00:35:29.406 18:42:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.406 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:29.665 18:42:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: ]] 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.665 nvme0n1 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.665 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.923 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.923 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.923 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.923 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.923 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.923 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.923 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:29.923 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.923 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:29.923 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:29.923 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:29.923 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:29.923 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:29.923 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:29.923 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:29.923 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:29.923 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:29.923 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:29.923 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.923 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.924 18:42:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.924 nvme0n1 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.924 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: ]] 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.182 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.441 nvme0n1 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: ]] 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:30.441 
18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.441 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.699 nvme0n1 00:35:30.699 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.699 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.699 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.699 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.699 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.699 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.699 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.699 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.699 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.699 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:30.699 18:42:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: ]] 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.699 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.700 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:30.700 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.700 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:30.700 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:30.700 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:30.700 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:30.700 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.700 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.270 nvme0n1 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.270 18:42:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:31.270 
18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: ]] 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.270 18:42:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.270 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.271 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:31.271 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.271 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.529 nvme0n1 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.529 18:42:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.529 
18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.529 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.788 nvme0n1 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: ]] 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.788 18:42:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.788 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.354 nvme0n1 00:35:32.354 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.354 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.354 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.354 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.354 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.354 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: ]] 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:32.613 18:42:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.613 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.179 nvme0n1 00:35:33.179 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.179 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.179 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: ]] 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.180 18:42:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.180 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.746 nvme0n1 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.746 18:42:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.746 18:42:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: ]] 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:33.746 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:33.747 18:42:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.747 18:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.313 nvme0n1 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.313 18:42:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:34.313 18:42:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.313 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.880 nvme0n1 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: ]] 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:34.880 18:42:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.880 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.811 nvme0n1 00:35:35.811 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.811 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.811 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.811 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.811 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.811 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.811 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.811 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.811 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.811 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.070 18:42:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: ]] 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.070 18:42:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:36.070 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.070 18:42:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.002 nvme0n1 00:35:37.002 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.002 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.002 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.002 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.002 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.002 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.002 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.002 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.002 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.002 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.002 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.002 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.002 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: ]] 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.003 18:42:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.003 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.937 nvme0n1 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.937 18:42:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:37.937 18:42:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: ]] 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:37.937 18:42:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.937 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.966 nvme0n1 00:35:38.966 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.966 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.966 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.966 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.966 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.245 18:42:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:39.245 18:42:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.245 18:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.176 nvme0n1 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:40.176 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: ]] 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:40.177 18:42:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.177 nvme0n1 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.177 18:42:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: ]] 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.177 18:42:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:40.177 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.177 18:42:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.436 nvme0n1 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: ]] 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.436 18:42:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.436 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.695 nvme0n1 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.695 18:42:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:40.695 18:42:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: ]] 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:40.695 18:42:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.695 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:40.696 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.696 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:40.696 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:40.696 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:40.696 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:40.696 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.696 18:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.954 nvme0n1 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.954 18:42:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:40.954 18:42:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.954 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.213 nvme0n1 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: ]] 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # keyid=0 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.213 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.472 nvme0n1 00:35:41.472 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.472 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.472 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.472 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.472 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.472 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.472 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.472 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.472 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.472 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.472 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.472 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.472 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:41.472 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.472 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:41.472 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:35:41.472 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:41.472 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: ]] 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:41.473 18:42:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.473 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.731 nvme0n1 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: ]] 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.731 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:35:41.732 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:41.732 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:41.732 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.732 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.732 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:41.732 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.732 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:41.732 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:41.732 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:41.732 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:41.732 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.732 18:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.990 nvme0n1 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:41.990 18:42:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: ]] 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 
-- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.990 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.248 nvme0n1 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.248 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:42.249 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.249 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.249 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.249 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.249 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:42.249 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:42.249 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:42.249 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.249 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.249 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:42.249 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.249 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:42.249 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:42.249 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:42.249 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:42.249 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.249 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.507 nvme0n1 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.507 18:42:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: ]] 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.507 18:42:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:42.507 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.508 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:42.508 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:42.508 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:42.508 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:42.508 18:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.508 18:42:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.766 nvme0n1 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: ]] 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.766 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.024 
18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.024 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.024 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:43.024 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:43.024 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:43.024 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.024 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.024 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:43.024 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.024 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:43.024 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:43.024 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:43.024 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:43.024 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.024 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.282 nvme0n1 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.282 18:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.282 18:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: ]] 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.282 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.283 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:43.283 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.283 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:43.283 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:43.283 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:43.283 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:43.283 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.283 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.541 nvme0n1 00:35:43.541 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.541 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.541 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.541 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.541 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.541 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: ]] 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:43.542 18:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.542 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.799 nvme0n1 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.799 18:42:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.799 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.800 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:43.800 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:43.800 18:42:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.800 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:43.800 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.800 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.057 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.057 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.057 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:44.057 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.057 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.057 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.057 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.057 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.057 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.057 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.057 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.057 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.057 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:44.057 
18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.057 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.316 nvme0n1 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.317 18:42:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: ]] 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.317 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.883 nvme0n1 
00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:44.883 18:42:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: ]] 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.883 
18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.883 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.449 nvme0n1 00:35:45.449 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.449 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.449 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.449 18:42:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: ]] 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.450 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.016 nvme0n1 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: ]] 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.016 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.581 nvme0n1 00:35:46.581 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.581 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.582 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.582 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.582 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.582 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.839 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.839 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.839 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.839 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.839 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.839 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:46.839 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.840 18:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:47.406 nvme0n1 00:35:47.406 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.406 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.406 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.406 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.406 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.406 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.406 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.406 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.406 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.406 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.406 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.406 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:47.406 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.406 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:47.406 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.406 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.406 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:47.406 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:47.406 18:42:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: ]] 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.407 18:42:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.407 18:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.341 nvme0n1 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:48.341 18:42:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: ]] 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:48.341 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.342 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:48.342 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:48.342 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:48.342 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:48.342 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.342 18:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.276 nvme0n1 00:35:49.276 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.276 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.276 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.276 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.276 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.534 
18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.534 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.534 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: ]] 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.535 18:42:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.535 18:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.470 nvme0n1 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.470 18:42:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: ]] 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:50.470 18:42:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.470 18:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.417 nvme0n1 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:51.417 18:42:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.417 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.352 nvme0n1 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.352 
18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: ]] 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.352 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.611 nvme0n1 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.611 18:42:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: ]] 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.611 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.870 nvme0n1 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: ]] 00:35:52.870 18:42:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.870 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.129 nvme0n1 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.129 18:42:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: ]] 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:53.129 18:42:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.129 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.387 nvme0n1 00:35:53.387 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.387 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.387 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.387 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.387 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.387 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.388 18:42:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.388 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.646 nvme0n1 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: ]] 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.646 18:42:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.646 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.904 nvme0n1 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.904 18:42:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:53.904 
18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: ]] 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:53.904 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.905 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:53.905 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.905 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.905 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.905 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.905 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:53.905 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:53.905 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:35:53.905 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.905 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.905 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:53.905 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.905 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:53.905 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:53.905 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:53.905 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:53.905 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.905 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.163 nvme0n1 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: ]] 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 
00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.163 18:42:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.163 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.421 nvme0n1 00:35:54.421 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.422 18:42:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: ]] 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.422 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.680 nvme0n1 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:54.680 18:42:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.680 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.939 nvme0n1 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.939 
18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: ]] 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.939 18:42:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.939 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.197 nvme0n1 00:35:55.197 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.197 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.197 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.197 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.197 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.197 18:42:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.197 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.197 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.197 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.197 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.197 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.197 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.197 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:55.197 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.197 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:55.197 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:55.197 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:55.197 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:55.197 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:55.197 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:55.197 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:55.198 18:42:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: ]] 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.198 18:42:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.198 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.763 nvme0n1 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.763 18:42:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: ]] 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:55.763 18:42:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.763 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.022 nvme0n1 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: ]] 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:56.022 18:42:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.022 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.280 nvme0n1 00:35:56.280 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.280 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.280 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.280 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.280 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.280 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.280 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.280 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.280 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.280 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.539 
18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.539 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.797 nvme0n1 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:56.797 18:42:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: ]] 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.797 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:56.798 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.798 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.798 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.798 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.798 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.798 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.798 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:35:56.798 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.798 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.798 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.798 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.798 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.798 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.798 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.798 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:56.798 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.798 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.363 nvme0n1 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: ]] 00:35:57.363 18:42:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.363 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.928 nvme0n1 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
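The secrets echoed above follow the NVMe DH-HMAC-CHAP secret representation, `DHHC-1:<hh>:<base64>:`, where `<hh>` is the hash identifier (`00` = no transform; `01`/`02`/`03` select SHA-256/384/512) and the base64 payload is the secret followed by a 4-byte CRC. A minimal parser sketch (the `parse_dhhc1` helper is hypothetical, not part of the test suite; the key string is taken from the log):

```shell
# Sketch: split a DHHC-1 key into its hash-id and payload, and report sizes.
# The base64 payload decodes to <secret bytes> + 4 CRC bytes.
parse_dhhc1() {
  local key=$1
  local hash=${key#DHHC-1:}; hash=${hash%%:*}       # middle field, e.g. "00"
  local b64=${key#DHHC-1:"$hash":}; b64=${b64%:}    # base64 payload, trailing ':' dropped
  local bytes
  bytes=$(printf '%s' "$b64" | base64 -d | wc -c)
  echo "hash-id=$hash secret+crc=${bytes}B secret=$((bytes - 4))B"
}

# Key 0 from the log: 48 base64 chars -> 36 bytes -> 32-byte secret + CRC.
parse_dhhc1 "DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0:"
```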
00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: ]] 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.928 
18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.928 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.494 nvme0n1 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.494 18:42:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: ]] 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.494 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
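Note the `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` line in each cycle above: it uses bash's `${var:+...}` expansion to build the optional controller-key flag as an array that expands to zero words when no ckey exists, which is why the key-4 attach later in the log carries no `--dhchap-ctrlr-key` argument. A self-contained illustration of that idiom (`build_args` and the sample `ckeys` values are illustrative, not from the suite):

```shell
# ${ckeys[k]:+...} expands to the alternative text only when ckeys[k] is
# non-empty; word-splitting then turns it into two array elements, so an
# empty/missing ckey yields an empty array and the flag is omitted entirely.
declare -A ckeys=([1]="DHHC-1:02:placeholder:" [4]="")

build_args() {
  local keyid=$1
  local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "--dhchap-key key${keyid}" "${ckey[@]}"
}

build_args 1   # controller-key flag included
build_args 4   # controller-key flag omitted
```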
00:35:59.060 nvme0n1 00:35:59.060 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.060 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.060 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.060 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.060 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.060 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.318 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.318 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.318 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.318 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.318 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.318 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.318 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:59.319 
18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.319 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.884 nvme0n1 00:35:59.884 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.884 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.884 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.884 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.884 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.884 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.884 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.884 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.884 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.884 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.884 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.884 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:59.884 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.884 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:59.884 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.884 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.884 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:59.884 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:59.884 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:59.884 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:59.884 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.884 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:59.884 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDg2ZjZkNjM5MGQ2MzYwODc0ZjM2ZWQyNGU1NGE0MDf0XYs0: 00:35:59.884 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: ]] 00:35:59.884 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2E5OTI0YjEzNTQxN2JiZTU5OTc5ZmJhMWNiZmNlMzc3NzJjZWRlZTI1NmQ4ZjQ4ZTc1ZDg4MzBmYmY0MjYxZbB8zqg=: 00:35:59.884 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:59.885 18:42:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.885 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.819 nvme0n1 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.819 18:42:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: ]] 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:00.819 18:42:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.819 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.191 nvme0n1 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.191 18:43:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: ]] 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:02.191 18:43:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.191 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.192 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:02.192 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.192 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:02.192 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:02.192 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:02.192 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:02.192 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.192 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.125 nvme0n1 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.125 18:43:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0ZDNlNDZlOGVhMGY4MTFkZjcxNDg3YjljNWE2ZTdlOWNmMzFiODcxZmVmOWJmmJr5Yw==: 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: ]] 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ0MzY0NTcxNjYyZmVkYTc2YTkzZDI4NjcyNGI0NTGzBaWU: 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.125 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:36:04.060 nvme0n1 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVlMWU4NTU2YjUxMGQzYmVmYTk0ZjE3M2RiYjQzYjg0ODcwODE0NGM1NTZiNjdmODVhZDczNTI5MDhiNzFhZunXMEc=: 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:04.060 
18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.060 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.994 nvme0n1 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: ]] 00:36:04.994 
18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.994 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.994 request: 00:36:04.994 { 00:36:04.994 "name": "nvme0", 00:36:04.994 "trtype": "tcp", 00:36:05.253 "traddr": "10.0.0.1", 00:36:05.253 "adrfam": "ipv4", 00:36:05.253 "trsvcid": "4420", 00:36:05.253 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:05.253 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:05.253 "prchk_reftag": false, 00:36:05.253 "prchk_guard": false, 00:36:05.253 "hdgst": false, 00:36:05.253 "ddgst": false, 00:36:05.253 "allow_unrecognized_csi": false, 00:36:05.253 "method": "bdev_nvme_attach_controller", 00:36:05.253 "req_id": 1 00:36:05.253 } 00:36:05.253 Got JSON-RPC error response 00:36:05.253 response: 00:36:05.253 { 00:36:05.253 "code": -5, 00:36:05.253 "message": "Input/output 
error" 00:36:05.253 } 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.253 request: 00:36:05.253 { 00:36:05.253 "name": "nvme0", 00:36:05.253 "trtype": "tcp", 00:36:05.253 "traddr": "10.0.0.1", 
00:36:05.253 "adrfam": "ipv4", 00:36:05.253 "trsvcid": "4420", 00:36:05.253 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:05.253 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:05.253 "prchk_reftag": false, 00:36:05.253 "prchk_guard": false, 00:36:05.253 "hdgst": false, 00:36:05.253 "ddgst": false, 00:36:05.253 "dhchap_key": "key2", 00:36:05.253 "allow_unrecognized_csi": false, 00:36:05.253 "method": "bdev_nvme_attach_controller", 00:36:05.253 "req_id": 1 00:36:05.253 } 00:36:05.253 Got JSON-RPC error response 00:36:05.253 response: 00:36:05.253 { 00:36:05.253 "code": -5, 00:36:05.253 "message": "Input/output error" 00:36:05.253 } 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.253 18:43:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:05.253 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:05.253 18:43:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:05.254 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:05.254 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.254 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.510 request: 00:36:05.510 { 00:36:05.510 "name": "nvme0", 00:36:05.510 "trtype": "tcp", 00:36:05.510 "traddr": "10.0.0.1", 00:36:05.510 "adrfam": "ipv4", 00:36:05.510 "trsvcid": "4420", 00:36:05.510 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:05.510 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:05.510 "prchk_reftag": false, 00:36:05.510 "prchk_guard": false, 00:36:05.510 "hdgst": false, 00:36:05.510 "ddgst": false, 00:36:05.510 "dhchap_key": "key1", 00:36:05.510 "dhchap_ctrlr_key": "ckey2", 00:36:05.510 "allow_unrecognized_csi": false, 00:36:05.510 "method": "bdev_nvme_attach_controller", 00:36:05.510 "req_id": 1 00:36:05.510 } 00:36:05.510 Got JSON-RPC error response 00:36:05.510 response: 00:36:05.510 { 00:36:05.510 "code": -5, 00:36:05.510 "message": "Input/output error" 00:36:05.510 } 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.510 nvme0n1 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.510 18:43:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:36:05.510 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:05.511 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:05.511 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:36:05.511 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: ]] 00:36:05.511 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:36:05.511 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:05.511 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.511 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.511 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.511 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.511 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.511 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.511 18:43:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:05.511 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.769 request: 00:36:05.769 { 00:36:05.769 "name": "nvme0", 00:36:05.769 "dhchap_key": "key1", 00:36:05.769 "dhchap_ctrlr_key": "ckey2", 00:36:05.769 "method": "bdev_nvme_set_keys", 00:36:05.769 "req_id": 1 00:36:05.769 } 00:36:05.769 Got JSON-RPC error response 00:36:05.769 response: 00:36:05.769 { 00:36:05.769 "code": -13, 00:36:05.769 "message": "Permission denied" 00:36:05.769 } 00:36:05.769 
18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:05.769 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:06.702 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.702 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.702 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.702 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:06.702 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.702 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:06.702 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ3YmI5MmEyMDEzZDM5MTBiM2YxOTk0YTU2YmFiYzJiYjY1NjIzOTEyZDVhMzNkHxlA0A==: 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: ]] 00:36:08.076 18:43:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlZWJlZWMzMjU1MjNmOGJmNzQ2YWMyOWMyOTg3MTlkNWQyNjlmZjE5YzUwYTgx5SUtoQ==: 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.076 nvme0n1 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.076 18:43:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE2ZWUxMDcwNjRjZDNkZjUwMmFkMWZlY2NiNGFmNzEUPam3: 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: ]] 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNiY2QxZTUyYTU5ODk0YzIyNTM0ODUxNDRjZTA3MjgXVatv: 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:08.076 
18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.076 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.076 request: 00:36:08.076 { 00:36:08.076 "name": "nvme0", 00:36:08.076 "dhchap_key": "key2", 00:36:08.076 "dhchap_ctrlr_key": "ckey1", 00:36:08.076 "method": "bdev_nvme_set_keys", 00:36:08.076 "req_id": 1 00:36:08.076 } 00:36:08.076 Got JSON-RPC error response 00:36:08.076 response: 00:36:08.076 { 00:36:08.076 "code": -13, 00:36:08.076 "message": "Permission denied" 00:36:08.076 } 00:36:08.077 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:08.077 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:08.077 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:08.077 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:08.077 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:08.077 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.077 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.077 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.077 18:43:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:08.077 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.077 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:08.077 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:09.062 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.062 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.062 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.062 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:09.062 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.062 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:09.062 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:10.437 18:43:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:10.437 rmmod nvme_tcp 00:36:10.437 rmmod nvme_fabrics 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3117754 ']' 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3117754 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3117754 ']' 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3117754 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3117754 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3117754' 00:36:10.437 killing process with pid 3117754 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3117754 00:36:10.437 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3117754 00:36:11.374 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:11.374 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:11.374 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:11.374 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:36:11.374 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:36:11.374 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:11.374 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:36:11.374 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:11.374 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:11.374 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:11.374 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:11.374 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:13.280 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:13.280 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:13.280 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # 
rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:13.280 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:13.280 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:13.280 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:36:13.280 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:13.280 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:13.280 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:13.280 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:13.280 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:13.280 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:13.280 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:14.656 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:14.656 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:14.656 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:14.656 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:14.656 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:14.656 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:14.656 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:14.656 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:14.656 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:14.656 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:14.656 
0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:14.656 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:14.656 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:14.656 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:14.656 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:14.656 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:15.594 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:15.594 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.lDi /tmp/spdk.key-null.Esb /tmp/spdk.key-sha256.81l /tmp/spdk.key-sha384.Bnv /tmp/spdk.key-sha512.JJK /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:15.594 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:16.969 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:16.969 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:16.969 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:16.969 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:16.969 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:16.969 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:16.969 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:16.969 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:16.969 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:16.969 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:16.969 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:16.969 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:16.969 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:16.969 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:16.969 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:16.969 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 
00:36:16.969 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:16.969 00:36:16.969 real 0m56.391s 00:36:16.969 user 0m54.180s 00:36:16.969 sys 0m6.215s 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.969 ************************************ 00:36:16.969 END TEST nvmf_auth_host 00:36:16.969 ************************************ 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.969 ************************************ 00:36:16.969 START TEST nvmf_digest 00:36:16.969 ************************************ 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:16.969 * Looking for test storage... 
00:36:16.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:36:16.969 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:16.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.970 --rc genhtml_branch_coverage=1 00:36:16.970 --rc genhtml_function_coverage=1 00:36:16.970 --rc genhtml_legend=1 00:36:16.970 --rc geninfo_all_blocks=1 00:36:16.970 --rc geninfo_unexecuted_blocks=1 00:36:16.970 00:36:16.970 ' 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:16.970 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:36:16.970 --rc genhtml_branch_coverage=1 00:36:16.970 --rc genhtml_function_coverage=1 00:36:16.970 --rc genhtml_legend=1 00:36:16.970 --rc geninfo_all_blocks=1 00:36:16.970 --rc geninfo_unexecuted_blocks=1 00:36:16.970 00:36:16.970 ' 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:16.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.970 --rc genhtml_branch_coverage=1 00:36:16.970 --rc genhtml_function_coverage=1 00:36:16.970 --rc genhtml_legend=1 00:36:16.970 --rc geninfo_all_blocks=1 00:36:16.970 --rc geninfo_unexecuted_blocks=1 00:36:16.970 00:36:16.970 ' 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:16.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.970 --rc genhtml_branch_coverage=1 00:36:16.970 --rc genhtml_function_coverage=1 00:36:16.970 --rc genhtml_legend=1 00:36:16.970 --rc geninfo_all_blocks=1 00:36:16.970 --rc geninfo_unexecuted_blocks=1 00:36:16.970 00:36:16.970 ' 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # 
export PATH 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:16.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- 
# bperfsock=/var/tmp/bperf.sock 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:16.970 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:17.229 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:17.229 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:17.229 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:36:17.229 18:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:19.133 
18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:19.133 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:19.133 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:19.133 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:19.133 
18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:19.133 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:19.133 18:43:17 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:19.133 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:19.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:19.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:36:19.392 00:36:19.392 --- 10.0.0.2 ping statistics --- 00:36:19.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.392 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:19.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:19.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:36:19.392 00:36:19.392 --- 10.0.0.1 ping statistics --- 00:36:19.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.392 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:19.392 ************************************ 00:36:19.392 START TEST nvmf_digest_clean 00:36:19.392 ************************************ 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@10 -- # set +x 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3128019 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3128019 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3128019 ']' 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:19.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:19.392 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:19.392 [2024-11-18 18:43:17.627929] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:36:19.392 [2024-11-18 18:43:17.628073] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:19.650 [2024-11-18 18:43:17.771166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:19.650 [2024-11-18 18:43:17.899543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:19.650 [2024-11-18 18:43:17.899646] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:19.650 [2024-11-18 18:43:17.899674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:19.650 [2024-11-18 18:43:17.899700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:19.650 [2024-11-18 18:43:17.899722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:19.650 [2024-11-18 18:43:17.901326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:20.582 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:20.582 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:20.582 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:20.582 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:20.582 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:20.582 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:20.582 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:20.582 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:20.582 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:20.582 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.582 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:20.841 null0 00:36:20.841 [2024-11-18 18:43:19.019688] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:20.841 [2024-11-18 18:43:19.044034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:20.841 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.841 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:36:20.841 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:20.841 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:20.841 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:20.841 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:20.841 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:20.841 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:20.841 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3128169 00:36:20.841 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:20.841 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3128169 /var/tmp/bperf.sock 00:36:20.841 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3128169 ']' 00:36:20.841 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:20.841 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:20.841 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:20.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:36:20.841 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:20.841 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:20.841 [2024-11-18 18:43:19.140671] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:36:20.841 [2024-11-18 18:43:19.140828] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3128169 ] 00:36:21.098 [2024-11-18 18:43:19.297547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:21.355 [2024-11-18 18:43:19.438839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:21.920 18:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:21.920 18:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:21.920 18:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:21.920 18:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:21.920 18:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:22.486 18:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:22.486 18:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:23.052 nvme0n1 00:36:23.052 18:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:23.052 18:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:23.052 Running I/O for 2 seconds... 00:36:24.920 13234.00 IOPS, 51.70 MiB/s [2024-11-18T17:43:23.257Z] 13502.50 IOPS, 52.74 MiB/s 00:36:24.920 Latency(us) 00:36:24.920 [2024-11-18T17:43:23.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:24.920 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:24.920 nvme0n1 : 2.01 13525.63 52.83 0.00 0.00 9451.55 4660.34 21165.70 00:36:24.920 [2024-11-18T17:43:23.257Z] =================================================================================================================== 00:36:24.920 [2024-11-18T17:43:23.257Z] Total : 13525.63 52.83 0.00 0.00 9451.55 4660.34 21165.70 00:36:24.920 { 00:36:24.920 "results": [ 00:36:24.920 { 00:36:24.920 "job": "nvme0n1", 00:36:24.920 "core_mask": "0x2", 00:36:24.920 "workload": "randread", 00:36:24.920 "status": "finished", 00:36:24.920 "queue_depth": 128, 00:36:24.920 "io_size": 4096, 00:36:24.920 "runtime": 2.006043, 00:36:24.920 "iops": 13525.632301999509, 00:36:24.920 "mibps": 52.83450117968558, 00:36:24.920 "io_failed": 0, 00:36:24.920 "io_timeout": 0, 00:36:24.920 "avg_latency_us": 9451.553253152168, 00:36:24.920 "min_latency_us": 4660.337777777778, 00:36:24.920 "max_latency_us": 21165.70074074074 00:36:24.920 } 00:36:24.920 ], 00:36:24.920 "core_count": 1 00:36:24.920 } 00:36:24.920 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:24.920 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:36:24.920 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:24.921 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:24.921 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:24.921 | select(.opcode=="crc32c") 00:36:24.921 | "\(.module_name) \(.executed)"' 00:36:25.178 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:25.178 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:25.178 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:25.178 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:25.178 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3128169 00:36:25.178 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3128169 ']' 00:36:25.178 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3128169 00:36:25.178 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:25.178 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:25.178 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3128169 00:36:25.437 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:25.437 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:25.437 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3128169' 00:36:25.437 killing process with pid 3128169 00:36:25.437 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3128169 00:36:25.437 Received shutdown signal, test time was about 2.000000 seconds 00:36:25.437 00:36:25.437 Latency(us) 00:36:25.437 [2024-11-18T17:43:23.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:25.437 [2024-11-18T17:43:23.774Z] =================================================================================================================== 00:36:25.437 [2024-11-18T17:43:23.774Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:25.437 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3128169 00:36:26.370 18:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:26.370 18:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:26.370 18:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:26.370 18:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:26.370 18:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:26.370 18:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:26.371 18:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:26.371 18:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3128832 00:36:26.371 18:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:26.371 18:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3128832 /var/tmp/bperf.sock 00:36:26.371 18:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3128832 ']' 00:36:26.371 18:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:26.371 18:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:26.371 18:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:26.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:26.371 18:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:26.371 18:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:26.371 [2024-11-18 18:43:24.581519] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:36:26.371 [2024-11-18 18:43:24.581685] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3128832 ] 00:36:26.371 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:26.371 Zero copy mechanism will not be used. 
00:36:26.629 [2024-11-18 18:43:24.739945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.629 [2024-11-18 18:43:24.872674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:27.563 18:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:27.563 18:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:27.563 18:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:27.563 18:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:27.563 18:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:28.130 18:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:28.130 18:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:28.388 nvme0n1 00:36:28.388 18:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:28.388 18:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:28.646 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:28.646 Zero copy mechanism will not be used. 00:36:28.646 Running I/O for 2 seconds... 
00:36:30.515 4812.00 IOPS, 601.50 MiB/s [2024-11-18T17:43:28.852Z] 4727.50 IOPS, 590.94 MiB/s 00:36:30.515 Latency(us) 00:36:30.515 [2024-11-18T17:43:28.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.515 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:30.515 nvme0n1 : 2.00 4729.99 591.25 0.00 0.00 3376.19 1013.38 7427.41 00:36:30.515 [2024-11-18T17:43:28.852Z] =================================================================================================================== 00:36:30.515 [2024-11-18T17:43:28.852Z] Total : 4729.99 591.25 0.00 0.00 3376.19 1013.38 7427.41 00:36:30.515 { 00:36:30.515 "results": [ 00:36:30.515 { 00:36:30.515 "job": "nvme0n1", 00:36:30.515 "core_mask": "0x2", 00:36:30.515 "workload": "randread", 00:36:30.515 "status": "finished", 00:36:30.515 "queue_depth": 16, 00:36:30.515 "io_size": 131072, 00:36:30.515 "runtime": 2.002331, 00:36:30.515 "iops": 4729.987199918495, 00:36:30.515 "mibps": 591.2483999898119, 00:36:30.515 "io_failed": 0, 00:36:30.515 "io_timeout": 0, 00:36:30.515 "avg_latency_us": 3376.1905131062854, 00:36:30.515 "min_latency_us": 1013.3807407407407, 00:36:30.515 "max_latency_us": 7427.413333333333 00:36:30.515 } 00:36:30.515 ], 00:36:30.515 "core_count": 1 00:36:30.515 } 00:36:30.515 18:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:30.515 18:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:30.515 18:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:30.516 18:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:30.516 18:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:36:30.516 | select(.opcode=="crc32c") 00:36:30.516 | "\(.module_name) \(.executed)"' 00:36:31.084 18:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:31.084 18:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:31.084 18:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:31.084 18:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:31.084 18:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3128832 00:36:31.084 18:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3128832 ']' 00:36:31.084 18:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3128832 00:36:31.084 18:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:31.084 18:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:31.084 18:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3128832 00:36:31.084 18:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:31.084 18:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:31.084 18:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3128832' 00:36:31.084 killing process with pid 3128832 00:36:31.084 18:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3128832 00:36:31.084 Received shutdown signal, test time was about 2.000000 seconds 00:36:31.084 
00:36:31.084 Latency(us) 00:36:31.084 [2024-11-18T17:43:29.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:31.084 [2024-11-18T17:43:29.421Z] =================================================================================================================== 00:36:31.084 [2024-11-18T17:43:29.421Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:31.084 18:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3128832 00:36:32.017 18:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:32.017 18:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:32.017 18:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:32.017 18:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:32.017 18:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:32.017 18:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:32.017 18:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:32.017 18:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3129500 00:36:32.017 18:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:32.017 18:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3129500 /var/tmp/bperf.sock 00:36:32.017 18:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3129500 ']' 00:36:32.017 18:43:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:32.017 18:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:32.017 18:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:32.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:32.017 18:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:32.017 18:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:32.017 [2024-11-18 18:43:30.157787] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:36:32.017 [2024-11-18 18:43:30.157930] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3129500 ] 00:36:32.017 [2024-11-18 18:43:30.304160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:32.275 [2024-11-18 18:43:30.445944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:33.210 18:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:33.210 18:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:33.210 18:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:33.210 18:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:33.210 18:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:33.776 18:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:33.776 18:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:34.034 nvme0n1 00:36:34.034 18:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:34.034 18:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:34.034 Running I/O for 2 seconds... 
00:36:36.341 16146.00 IOPS, 63.07 MiB/s [2024-11-18T17:43:34.678Z] 16163.50 IOPS, 63.14 MiB/s 00:36:36.341 Latency(us) 00:36:36.341 [2024-11-18T17:43:34.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:36.341 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:36.341 nvme0n1 : 2.01 16144.17 63.06 0.00 0.00 7918.66 3713.71 13204.29 00:36:36.341 [2024-11-18T17:43:34.678Z] =================================================================================================================== 00:36:36.341 [2024-11-18T17:43:34.678Z] Total : 16144.17 63.06 0.00 0.00 7918.66 3713.71 13204.29 00:36:36.341 { 00:36:36.341 "results": [ 00:36:36.341 { 00:36:36.341 "job": "nvme0n1", 00:36:36.341 "core_mask": "0x2", 00:36:36.341 "workload": "randwrite", 00:36:36.341 "status": "finished", 00:36:36.341 "queue_depth": 128, 00:36:36.341 "io_size": 4096, 00:36:36.341 "runtime": 2.012429, 00:36:36.341 "iops": 16144.172042839773, 00:36:36.342 "mibps": 63.06317204234286, 00:36:36.342 "io_failed": 0, 00:36:36.342 "io_timeout": 0, 00:36:36.342 "avg_latency_us": 7918.664563436286, 00:36:36.342 "min_latency_us": 3713.7066666666665, 00:36:36.342 "max_latency_us": 13204.29037037037 00:36:36.342 } 00:36:36.342 ], 00:36:36.342 "core_count": 1 00:36:36.342 } 00:36:36.342 18:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:36.342 18:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:36.342 18:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:36.342 18:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:36.342 18:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:36:36.342 | select(.opcode=="crc32c") 00:36:36.342 | "\(.module_name) \(.executed)"' 00:36:36.600 18:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:36.600 18:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:36.600 18:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:36.600 18:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:36.600 18:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3129500 00:36:36.600 18:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3129500 ']' 00:36:36.600 18:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3129500 00:36:36.600 18:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:36.600 18:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:36.600 18:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3129500 00:36:36.600 18:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:36.600 18:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:36.600 18:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3129500' 00:36:36.600 killing process with pid 3129500 00:36:36.600 18:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3129500 00:36:36.600 Received shutdown signal, test time was about 2.000000 seconds 00:36:36.600 
00:36:36.600 Latency(us) 00:36:36.600 [2024-11-18T17:43:34.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:36.600 [2024-11-18T17:43:34.937Z] =================================================================================================================== 00:36:36.600 [2024-11-18T17:43:34.937Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:36.600 18:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3129500 00:36:37.534 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:37.534 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:37.534 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:37.534 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:37.534 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:37.534 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:37.534 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:37.534 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3130158 00:36:37.534 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3130158 /var/tmp/bperf.sock 00:36:37.534 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:37.534 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3130158 ']' 00:36:37.534 18:43:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:37.534 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:37.534 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:37.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:37.534 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:37.534 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:37.534 [2024-11-18 18:43:35.732209] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:36:37.534 [2024-11-18 18:43:35.732374] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3130158 ] 00:36:37.534 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:37.534 Zero copy mechanism will not be used. 
00:36:37.792 [2024-11-18 18:43:35.889633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:37.792 [2024-11-18 18:43:36.029137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:38.359 18:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:38.359 18:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:38.359 18:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:38.359 18:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:38.359 18:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:39.293 18:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:39.293 18:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:39.551 nvme0n1 00:36:39.551 18:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:39.551 18:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:39.809 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:39.809 Zero copy mechanism will not be used. 00:36:39.809 Running I/O for 2 seconds... 
00:36:41.677 4848.00 IOPS, 606.00 MiB/s [2024-11-18T17:43:40.014Z] 5035.50 IOPS, 629.44 MiB/s 00:36:41.677 Latency(us) 00:36:41.677 [2024-11-18T17:43:40.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:41.677 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:41.677 nvme0n1 : 2.00 5032.34 629.04 0.00 0.00 3165.69 2657.85 7718.68 00:36:41.677 [2024-11-18T17:43:40.014Z] =================================================================================================================== 00:36:41.677 [2024-11-18T17:43:40.014Z] Total : 5032.34 629.04 0.00 0.00 3165.69 2657.85 7718.68 00:36:41.677 { 00:36:41.677 "results": [ 00:36:41.677 { 00:36:41.677 "job": "nvme0n1", 00:36:41.677 "core_mask": "0x2", 00:36:41.677 "workload": "randwrite", 00:36:41.677 "status": "finished", 00:36:41.677 "queue_depth": 16, 00:36:41.677 "io_size": 131072, 00:36:41.677 "runtime": 2.004436, 00:36:41.677 "iops": 5032.338273708913, 00:36:41.677 "mibps": 629.0422842136142, 00:36:41.677 "io_failed": 0, 00:36:41.677 "io_timeout": 0, 00:36:41.677 "avg_latency_us": 3165.6855719683203, 00:36:41.677 "min_latency_us": 2657.8488888888887, 00:36:41.677 "max_latency_us": 7718.684444444444 00:36:41.677 } 00:36:41.677 ], 00:36:41.677 "core_count": 1 00:36:41.677 } 00:36:41.677 18:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:41.677 18:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:41.677 18:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:41.677 18:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:41.677 18:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:36:41.677 | select(.opcode=="crc32c") 00:36:41.677 | "\(.module_name) \(.executed)"' 00:36:41.960 18:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:41.960 18:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:41.960 18:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:41.960 18:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:41.960 18:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3130158 00:36:41.960 18:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3130158 ']' 00:36:41.960 18:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3130158 00:36:41.960 18:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:41.960 18:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:41.960 18:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3130158 00:36:42.260 18:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:42.260 18:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:42.260 18:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3130158' 00:36:42.260 killing process with pid 3130158 00:36:42.260 18:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3130158 00:36:42.260 Received shutdown signal, test time was about 2.000000 seconds 00:36:42.260 
00:36:42.260 Latency(us) 00:36:42.260 [2024-11-18T17:43:40.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:42.260 [2024-11-18T17:43:40.597Z] =================================================================================================================== 00:36:42.260 [2024-11-18T17:43:40.597Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:42.260 18:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3130158 00:36:43.195 18:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3128019 00:36:43.195 18:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3128019 ']' 00:36:43.195 18:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3128019 00:36:43.195 18:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:43.195 18:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:43.195 18:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3128019 00:36:43.195 18:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:43.195 18:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:43.195 18:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3128019' 00:36:43.195 killing process with pid 3128019 00:36:43.195 18:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3128019 00:36:43.195 18:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3128019 00:36:44.131 00:36:44.131 real 
0m24.892s 00:36:44.131 user 0m48.152s 00:36:44.131 sys 0m4.987s 00:36:44.131 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:44.131 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:44.131 ************************************ 00:36:44.131 END TEST nvmf_digest_clean 00:36:44.131 ************************************ 00:36:44.131 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:44.131 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:44.131 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:44.131 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:44.389 ************************************ 00:36:44.389 START TEST nvmf_digest_error 00:36:44.389 ************************************ 00:36:44.389 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:36:44.389 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:44.389 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:44.389 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:44.389 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:44.389 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3130988 00:36:44.389 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:44.389 
18:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3130988 00:36:44.389 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3130988 ']' 00:36:44.389 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:44.389 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:44.389 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:44.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:44.389 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:44.389 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:44.389 [2024-11-18 18:43:42.567790] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:36:44.389 [2024-11-18 18:43:42.567938] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:44.389 [2024-11-18 18:43:42.707445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:44.647 [2024-11-18 18:43:42.836232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:44.647 [2024-11-18 18:43:42.836332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:44.647 [2024-11-18 18:43:42.836357] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:44.647 [2024-11-18 18:43:42.836382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:44.647 [2024-11-18 18:43:42.836407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:44.647 [2024-11-18 18:43:42.838034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:45.213 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:45.213 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:45.213 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:45.213 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:45.213 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:45.471 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:45.471 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:45.471 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.471 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:45.471 [2024-11-18 18:43:43.568801] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:45.471 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.471 18:43:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:45.471 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:45.471 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.471 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:45.730 null0 00:36:45.730 [2024-11-18 18:43:43.966733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:45.730 [2024-11-18 18:43:43.991099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:45.730 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.730 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:45.730 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:45.730 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:45.730 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:45.730 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:45.730 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3131150 00:36:45.730 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:45.730 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3131150 /var/tmp/bperf.sock 00:36:45.730 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3131150 ']' 
00:36:45.730 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:45.730 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:45.730 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:45.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:45.730 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:45.730 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:45.988 [2024-11-18 18:43:44.081497] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:36:45.988 [2024-11-18 18:43:44.081649] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3131150 ] 00:36:45.988 [2024-11-18 18:43:44.223227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:46.246 [2024-11-18 18:43:44.360463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:46.812 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:46.812 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:46.812 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:46.812 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:47.070 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:47.070 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.070 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:47.070 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.070 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:47.070 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:47.635 nvme0n1 00:36:47.635 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:47.635 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.635 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:47.635 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.635 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:47.635 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:47.635 Running I/O for 2 seconds... 00:36:47.635 [2024-11-18 18:43:45.849030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.635 [2024-11-18 18:43:45.849107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.635 [2024-11-18 18:43:45.849143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.635 [2024-11-18 18:43:45.867698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.635 [2024-11-18 18:43:45.867758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.635 [2024-11-18 18:43:45.867790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.635 [2024-11-18 18:43:45.888595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.635 [2024-11-18 18:43:45.888684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.635 [2024-11-18 18:43:45.888714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.635 [2024-11-18 18:43:45.903901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.636 [2024-11-18 18:43:45.903955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 
nsid:1 lba:12492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.636 [2024-11-18 18:43:45.903996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.636 [2024-11-18 18:43:45.922465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.636 [2024-11-18 18:43:45.922514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.636 [2024-11-18 18:43:45.922543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.636 [2024-11-18 18:43:45.940219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.636 [2024-11-18 18:43:45.940269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.636 [2024-11-18 18:43:45.940299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.636 [2024-11-18 18:43:45.959545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.636 [2024-11-18 18:43:45.959593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.636 [2024-11-18 18:43:45.959633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.894 [2024-11-18 18:43:45.976831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.894 [2024-11-18 
18:43:45.976874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.894 [2024-11-18 18:43:45.976917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.894 [2024-11-18 18:43:45.992389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.894 [2024-11-18 18:43:45.992437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.894 [2024-11-18 18:43:45.992467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.894 [2024-11-18 18:43:46.011584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.894 [2024-11-18 18:43:46.011666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.894 [2024-11-18 18:43:46.011705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.894 [2024-11-18 18:43:46.029714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.894 [2024-11-18 18:43:46.029755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.894 [2024-11-18 18:43:46.029780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.894 [2024-11-18 18:43:46.046495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.894 [2024-11-18 18:43:46.046543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.894 [2024-11-18 18:43:46.046574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.894 [2024-11-18 18:43:46.064231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.894 [2024-11-18 18:43:46.064289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.894 [2024-11-18 18:43:46.064329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.894 [2024-11-18 18:43:46.081838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.894 [2024-11-18 18:43:46.081881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.894 [2024-11-18 18:43:46.081935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.894 [2024-11-18 18:43:46.099504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.894 [2024-11-18 18:43:46.099552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.894 [2024-11-18 18:43:46.099583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.894 
[2024-11-18 18:43:46.116725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.894 [2024-11-18 18:43:46.116769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.894 [2024-11-18 18:43:46.116811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.894 [2024-11-18 18:43:46.137721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.894 [2024-11-18 18:43:46.137765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.894 [2024-11-18 18:43:46.137792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.894 [2024-11-18 18:43:46.152571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.894 [2024-11-18 18:43:46.152629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.894 [2024-11-18 18:43:46.152662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.894 [2024-11-18 18:43:46.170225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.894 [2024-11-18 18:43:46.170273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.894 [2024-11-18 18:43:46.170303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.894 [2024-11-18 18:43:46.188967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.894 [2024-11-18 18:43:46.189024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.894 [2024-11-18 18:43:46.189055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.894 [2024-11-18 18:43:46.205836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.894 [2024-11-18 18:43:46.205880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.894 [2024-11-18 18:43:46.205923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.894 [2024-11-18 18:43:46.224085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.895 [2024-11-18 18:43:46.224133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.895 [2024-11-18 18:43:46.224163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.153 [2024-11-18 18:43:46.240515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.153 [2024-11-18 18:43:46.240563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.153 [2024-11-18 18:43:46.240593] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.153 [2024-11-18 18:43:46.260590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.153 [2024-11-18 18:43:46.260648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.153 [2024-11-18 18:43:46.260692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.153 [2024-11-18 18:43:46.277577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.153 [2024-11-18 18:43:46.277635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.153 [2024-11-18 18:43:46.277681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.153 [2024-11-18 18:43:46.293775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.153 [2024-11-18 18:43:46.293831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.153 [2024-11-18 18:43:46.293873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.153 [2024-11-18 18:43:46.311908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.153 [2024-11-18 18:43:46.311971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12936 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.153 [2024-11-18 18:43:46.312001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.153 [2024-11-18 18:43:46.329264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.153 [2024-11-18 18:43:46.329313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.153 [2024-11-18 18:43:46.329342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.153 [2024-11-18 18:43:46.347040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.153 [2024-11-18 18:43:46.347088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.153 [2024-11-18 18:43:46.347118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.153 [2024-11-18 18:43:46.364210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.153 [2024-11-18 18:43:46.364258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.153 [2024-11-18 18:43:46.364287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.153 [2024-11-18 18:43:46.381526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.153 [2024-11-18 18:43:46.381574] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.153 [2024-11-18 18:43:46.381604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.153 [2024-11-18 18:43:46.398691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.153 [2024-11-18 18:43:46.398733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.153 [2024-11-18 18:43:46.398758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.153 [2024-11-18 18:43:46.417287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.153 [2024-11-18 18:43:46.417336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.154 [2024-11-18 18:43:46.417366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.154 [2024-11-18 18:43:46.434647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.154 [2024-11-18 18:43:46.434706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.154 [2024-11-18 18:43:46.434732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.154 [2024-11-18 18:43:46.451888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6150001f2a00) 00:36:48.154 [2024-11-18 18:43:46.451942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.154 [2024-11-18 18:43:46.451966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.154 [2024-11-18 18:43:46.469073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.154 [2024-11-18 18:43:46.469121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.154 [2024-11-18 18:43:46.469152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.154 [2024-11-18 18:43:46.486603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.154 [2024-11-18 18:43:46.486682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.154 [2024-11-18 18:43:46.486712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.412 [2024-11-18 18:43:46.504728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.412 [2024-11-18 18:43:46.504772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.413 [2024-11-18 18:43:46.504799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.413 [2024-11-18 
18:43:46.521554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.413 [2024-11-18 18:43:46.521603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.413 [2024-11-18 18:43:46.521643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.413 [2024-11-18 18:43:46.539035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.413 [2024-11-18 18:43:46.539083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.413 [2024-11-18 18:43:46.539113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.413 [2024-11-18 18:43:46.557353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.413 [2024-11-18 18:43:46.557412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.413 [2024-11-18 18:43:46.557443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.413 [2024-11-18 18:43:46.576521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.413 [2024-11-18 18:43:46.576580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.413 [2024-11-18 18:43:46.576620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.413 [2024-11-18 18:43:46.592734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.413 [2024-11-18 18:43:46.592793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.413 [2024-11-18 18:43:46.592819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.413 [2024-11-18 18:43:46.613280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.413 [2024-11-18 18:43:46.613330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.413 [2024-11-18 18:43:46.613361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.413 [2024-11-18 18:43:46.631720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.413 [2024-11-18 18:43:46.631763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.413 [2024-11-18 18:43:46.631789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.413 [2024-11-18 18:43:46.649025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.413 [2024-11-18 18:43:46.649072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.413 [2024-11-18 
18:43:46.649099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.413 [2024-11-18 18:43:46.662669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.413 [2024-11-18 18:43:46.662713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.413 [2024-11-18 18:43:46.662738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.413 [2024-11-18 18:43:46.683024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.413 [2024-11-18 18:43:46.683068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.413 [2024-11-18 18:43:46.683093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.413 [2024-11-18 18:43:46.703859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.413 [2024-11-18 18:43:46.703919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.413 [2024-11-18 18:43:46.703945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.413 [2024-11-18 18:43:46.726030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.413 [2024-11-18 18:43:46.726074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 
nsid:1 lba:20850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.413 [2024-11-18 18:43:46.726098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.413 [2024-11-18 18:43:46.742428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.413 [2024-11-18 18:43:46.742468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.413 [2024-11-18 18:43:46.742493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.672 [2024-11-18 18:43:46.758403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.672 [2024-11-18 18:43:46.758448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.672 [2024-11-18 18:43:46.758474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.672 [2024-11-18 18:43:46.775944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.672 [2024-11-18 18:43:46.776026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.672 [2024-11-18 18:43:46.776077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.672 [2024-11-18 18:43:46.790312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.672 [2024-11-18 
18:43:46.790355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.672 [2024-11-18 18:43:46.790394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.672 [2024-11-18 18:43:46.809030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.672 [2024-11-18 18:43:46.809073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.672 [2024-11-18 18:43:46.809099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.672 14176.00 IOPS, 55.38 MiB/s [2024-11-18T17:43:47.009Z] [2024-11-18 18:43:46.830687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.672 [2024-11-18 18:43:46.830732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.672 [2024-11-18 18:43:46.830760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.672 [2024-11-18 18:43:46.849735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.672 [2024-11-18 18:43:46.849791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.672 [2024-11-18 18:43:46.849818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.672 [2024-11-18 18:43:46.867479] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.672 [2024-11-18 18:43:46.867521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.672 [2024-11-18 18:43:46.867547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.672 [2024-11-18 18:43:46.883349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.672 [2024-11-18 18:43:46.883390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.672 [2024-11-18 18:43:46.883415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.672 [2024-11-18 18:43:46.899104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.672 [2024-11-18 18:43:46.899145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.672 [2024-11-18 18:43:46.899170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.672 [2024-11-18 18:43:46.918901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.672 [2024-11-18 18:43:46.918948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.672 [2024-11-18 18:43:46.918974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.672 [2024-11-18 18:43:46.935860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.672 [2024-11-18 18:43:46.935906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.672 [2024-11-18 18:43:46.935933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.672 [2024-11-18 18:43:46.953873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.672 [2024-11-18 18:43:46.953918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.672 [2024-11-18 18:43:46.953945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.672 [2024-11-18 18:43:46.970740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.672 [2024-11-18 18:43:46.970785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.672 [2024-11-18 18:43:46.970827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.672 [2024-11-18 18:43:46.987737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.672 [2024-11-18 18:43:46.987781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.672 [2024-11-18 18:43:46.987823] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.672 [2024-11-18 18:43:47.004879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.672 [2024-11-18 18:43:47.004923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.672 [2024-11-18 18:43:47.004950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.931 [2024-11-18 18:43:47.022251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.931 [2024-11-18 18:43:47.022293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.931 [2024-11-18 18:43:47.022319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.931 [2024-11-18 18:43:47.038893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.931 [2024-11-18 18:43:47.038950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.931 [2024-11-18 18:43:47.038975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.931 [2024-11-18 18:43:47.055288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.931 [2024-11-18 18:43:47.055331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10916 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.931 [2024-11-18 18:43:47.055356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.931 [2024-11-18 18:43:47.071271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.931 [2024-11-18 18:43:47.071314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.931 [2024-11-18 18:43:47.071338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.931 [2024-11-18 18:43:47.089768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.931 [2024-11-18 18:43:47.089835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.931 [2024-11-18 18:43:47.089863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.931 [2024-11-18 18:43:47.105948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.931 [2024-11-18 18:43:47.106005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.931 [2024-11-18 18:43:47.106032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.931 [2024-11-18 18:43:47.123564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.931 [2024-11-18 18:43:47.123627] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.931 [2024-11-18 18:43:47.123669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.931 [2024-11-18 18:43:47.145790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.931 [2024-11-18 18:43:47.145836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.931 [2024-11-18 18:43:47.145864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.931 [2024-11-18 18:43:47.160055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.931 [2024-11-18 18:43:47.160096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.931 [2024-11-18 18:43:47.160122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.931 [2024-11-18 18:43:47.178751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.931 [2024-11-18 18:43:47.178797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.931 [2024-11-18 18:43:47.178824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.931 [2024-11-18 18:43:47.199033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:48.931 [2024-11-18 18:43:47.199077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.931 [2024-11-18 18:43:47.199117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.931 [2024-11-18 18:43:47.216111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.931 [2024-11-18 18:43:47.216155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.931 [2024-11-18 18:43:47.216181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.931 [2024-11-18 18:43:47.231437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.931 [2024-11-18 18:43:47.231493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.931 [2024-11-18 18:43:47.231519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.931 [2024-11-18 18:43:47.248790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.931 [2024-11-18 18:43:47.248836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.931 [2024-11-18 18:43:47.248862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.190 [2024-11-18 18:43:47.271395] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.190 [2024-11-18 18:43:47.271442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.190 [2024-11-18 18:43:47.271469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.190 [2024-11-18 18:43:47.290538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.190 [2024-11-18 18:43:47.290596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.190 [2024-11-18 18:43:47.290644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.190 [2024-11-18 18:43:47.308004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.190 [2024-11-18 18:43:47.308049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.190 [2024-11-18 18:43:47.308076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.190 [2024-11-18 18:43:47.326295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.190 [2024-11-18 18:43:47.326336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.190 [2024-11-18 18:43:47.326361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.190 [2024-11-18 18:43:47.341163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.190 [2024-11-18 18:43:47.341223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.190 [2024-11-18 18:43:47.341251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.190 [2024-11-18 18:43:47.359256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.190 [2024-11-18 18:43:47.359314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.190 [2024-11-18 18:43:47.359340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.190 [2024-11-18 18:43:47.374408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.190 [2024-11-18 18:43:47.374469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.190 [2024-11-18 18:43:47.374495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.190 [2024-11-18 18:43:47.391187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.190 [2024-11-18 18:43:47.391228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.190 [2024-11-18 18:43:47.391265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.190 [2024-11-18 18:43:47.408769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.190 [2024-11-18 18:43:47.408813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.190 [2024-11-18 18:43:47.408839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.190 [2024-11-18 18:43:47.425525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.190 [2024-11-18 18:43:47.425569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.190 [2024-11-18 18:43:47.425620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.190 [2024-11-18 18:43:47.444001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.190 [2024-11-18 18:43:47.444047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.190 [2024-11-18 18:43:47.444074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.190 [2024-11-18 18:43:47.459197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.190 [2024-11-18 18:43:47.459237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15701 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.190 [2024-11-18 18:43:47.459261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.190 [2024-11-18 18:43:47.477294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.190 [2024-11-18 18:43:47.477339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.191 [2024-11-18 18:43:47.477366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.191 [2024-11-18 18:43:47.498706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.191 [2024-11-18 18:43:47.498754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.191 [2024-11-18 18:43:47.498781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.191 [2024-11-18 18:43:47.519215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.191 [2024-11-18 18:43:47.519275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.191 [2024-11-18 18:43:47.519302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.449 [2024-11-18 18:43:47.537091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.449 [2024-11-18 18:43:47.537138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.449 [2024-11-18 18:43:47.537165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.449 [2024-11-18 18:43:47.551545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.449 [2024-11-18 18:43:47.551587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.449 [2024-11-18 18:43:47.551637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.449 [2024-11-18 18:43:47.569703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.449 [2024-11-18 18:43:47.569747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.449 [2024-11-18 18:43:47.569773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.449 [2024-11-18 18:43:47.588465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.449 [2024-11-18 18:43:47.588506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.449 [2024-11-18 18:43:47.588530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.449 [2024-11-18 18:43:47.607367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150001f2a00) 00:36:49.449 [2024-11-18 18:43:47.607413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.449 [2024-11-18 18:43:47.607441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.449 [2024-11-18 18:43:47.625232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.449 [2024-11-18 18:43:47.625275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.449 [2024-11-18 18:43:47.625299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.449 [2024-11-18 18:43:47.642529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.449 [2024-11-18 18:43:47.642573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.449 [2024-11-18 18:43:47.642621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.449 [2024-11-18 18:43:47.662340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.449 [2024-11-18 18:43:47.662381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.449 [2024-11-18 18:43:47.662406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.449 [2024-11-18 18:43:47.678505] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.449 [2024-11-18 18:43:47.678551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.449 [2024-11-18 18:43:47.678578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.450 [2024-11-18 18:43:47.693138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.450 [2024-11-18 18:43:47.693180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.450 [2024-11-18 18:43:47.693223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.450 [2024-11-18 18:43:47.711034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.450 [2024-11-18 18:43:47.711075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.450 [2024-11-18 18:43:47.711100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.450 [2024-11-18 18:43:47.729937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.450 [2024-11-18 18:43:47.729981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.450 [2024-11-18 18:43:47.730008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.450 [2024-11-18 18:43:47.743986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.450 [2024-11-18 18:43:47.744046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.450 [2024-11-18 18:43:47.744072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.450 [2024-11-18 18:43:47.763216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.450 [2024-11-18 18:43:47.763263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.450 [2024-11-18 18:43:47.763290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.450 [2024-11-18 18:43:47.777440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.450 [2024-11-18 18:43:47.777481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.450 [2024-11-18 18:43:47.777505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.708 [2024-11-18 18:43:47.797562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.708 [2024-11-18 18:43:47.797630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.708 [2024-11-18 18:43:47.797658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.708 [2024-11-18 18:43:47.816795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.708 [2024-11-18 18:43:47.816839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.708 [2024-11-18 18:43:47.816866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.708 14321.50 IOPS, 55.94 MiB/s
00:36:49.708 Latency(us)
00:36:49.708 [2024-11-18T17:43:48.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:49.708 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:36:49.708 nvme0n1 : 2.01 14350.79 56.06 0.00 0.00 8909.13 4077.80 26602.76
00:36:49.708 [2024-11-18T17:43:48.045Z] ===================================================================================================================
00:36:49.708 [2024-11-18T17:43:48.045Z] Total : 14350.79 56.06 0.00 0.00 8909.13 4077.80 26602.76
00:36:49.708 {
00:36:49.708 "results": [
00:36:49.708 {
00:36:49.708 "job": "nvme0n1",
00:36:49.708 "core_mask": "0x2",
00:36:49.708 "workload": "randread",
00:36:49.708 "status": "finished",
00:36:49.708 "queue_depth": 128,
00:36:49.708 "io_size": 4096,
00:36:49.708 "runtime": 2.006022,
00:36:49.708 "iops": 14350.789771996519,
00:36:49.708 "mibps": 56.0577725468614,
00:36:49.708 "io_failed": 0,
00:36:49.708 "io_timeout": 0,
00:36:49.708 "avg_latency_us": 8909.127036676804,
00:36:49.708 "min_latency_us": 4077.7955555555554,
00:36:49.708 "max_latency_us": 26602.76148148148
00:36:49.708 }
00:36:49.708 ],
00:36:49.708 "core_count": 1
00:36:49.708 }
00:36:49.708 18:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:49.708 18:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:49.708 18:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:49.708 18:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:49.708 | .driver_specific
00:36:49.708 | .nvme_error
00:36:49.708 | .status_code
00:36:49.708 | .command_transient_transport_error'
00:36:49.967 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 112 > 0 ))
00:36:49.967 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3131150
00:36:49.967 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3131150 ']'
00:36:49.967 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3131150
00:36:49.967 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:36:49.967 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:49.967 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3131150
00:36:49.967 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:49.967 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:49.967 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3131150'
killing process with pid 3131150
18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3131150
00:36:49.967 Received shutdown signal, test time was about 2.000000 seconds
00:36:49.967
00:36:49.967 Latency(us)
00:36:49.967 [2024-11-18T17:43:48.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:49.967 [2024-11-18T17:43:48.304Z] ===================================================================================================================
00:36:49.967 [2024-11-18T17:43:48.304Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:49.967 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3131150
00:36:50.899 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:36:50.899 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:50.899 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:36:50.899 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:36:50.899 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:36:50.899 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3131773
00:36:50.899 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:36:50.900 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3131773 /var/tmp/bperf.sock
00:36:50.900 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3131773 ']'
00:36:50.900 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:50.900 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:50.900 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:50.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:50.900 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:50.900 [2024-11-18 18:43:49.146944] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:36:50.900 [2024-11-18 18:43:49.147091] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3131773 ]
00:36:50.900 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:50.900 Zero copy mechanism will not be used.
00:36:51.158 [2024-11-18 18:43:49.290746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:51.158 [2024-11-18 18:43:49.425737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:36:52.091 18:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:52.091 18:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:36:52.091 18:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:52.091 18:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:52.091 18:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:52.091 18:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:52.091 18:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:52.091 18:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:52.091 18:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:52.091 18:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:52.657 nvme0n1
00:36:52.657 18:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:52.657 18:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.657 18:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:52.657 18:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.657 18:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:52.657 18:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:52.657 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:52.657 Zero copy mechanism will not be used. 00:36:52.657 Running I/O for 2 seconds... 00:36:52.657 [2024-11-18 18:43:50.981174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.657 [2024-11-18 18:43:50.981258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.657 [2024-11-18 18:43:50.981296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:52.657 [2024-11-18 18:43:50.988632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.657 [2024-11-18 18:43:50.988687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.657 [2024-11-18 18:43:50.988719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:36:52.916 [2024-11-18 18:43:50.995001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.916 [2024-11-18 18:43:50.995052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.916 [2024-11-18 18:43:50.995083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:52.916 [2024-11-18 18:43:50.999231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.916 [2024-11-18 18:43:50.999279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.916 [2024-11-18 18:43:50.999309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:52.916 [2024-11-18 18:43:51.006380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.916 [2024-11-18 18:43:51.006429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.916 [2024-11-18 18:43:51.006459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:52.916 [2024-11-18 18:43:51.014071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.916 [2024-11-18 18:43:51.014120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.916 [2024-11-18 18:43:51.014151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:52.916 [2024-11-18 18:43:51.019142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.916 [2024-11-18 18:43:51.019190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.916 [2024-11-18 18:43:51.019220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:52.916 [2024-11-18 18:43:51.024536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.916 [2024-11-18 18:43:51.024584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.916 [2024-11-18 18:43:51.024623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:52.916 [2024-11-18 18:43:51.030061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.916 [2024-11-18 18:43:51.030120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.916 [2024-11-18 18:43:51.030151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:52.916 [2024-11-18 18:43:51.034579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.916 [2024-11-18 18:43:51.034650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.916 [2024-11-18 
18:43:51.034696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:52.916 [2024-11-18 18:43:51.039587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.916 [2024-11-18 18:43:51.039661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.039690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.044619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.044681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.044710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.049449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.049496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.049525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.054681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.054725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.054753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.060121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.060168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.060198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.064831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.064875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.064902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.070310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.070357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.070388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.077350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.077398] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.077428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.084348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.084397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.084428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.093367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.093416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.093447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.101908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.101966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.102009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.110691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.110736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.110778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.119367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.119415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.119446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.128039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.128089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.128119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.135245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.135293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.135323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 
18:43:51.142061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.142110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.142150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.149488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.149536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.149566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.156912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.156960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.156990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.164243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.164291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.164321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.171526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.171574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.171604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.178886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.178935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.178965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.186334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.186383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.186412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.193601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.193656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.193701] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.200755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.200802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.200833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.207930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.207979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.208009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.215216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.917 [2024-11-18 18:43:51.215265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.917 [2024-11-18 18:43:51.215295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:52.917 [2024-11-18 18:43:51.222459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.918 [2024-11-18 18:43:51.222507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4864 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.918 [2024-11-18 18:43:51.222536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:52.918 [2024-11-18 18:43:51.229749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.918 [2024-11-18 18:43:51.229791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.918 [2024-11-18 18:43:51.229816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:52.918 [2024-11-18 18:43:51.236993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.918 [2024-11-18 18:43:51.237041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.918 [2024-11-18 18:43:51.237070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:52.918 [2024-11-18 18:43:51.244340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.918 [2024-11-18 18:43:51.244390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.918 [2024-11-18 18:43:51.244420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:53.179 [2024-11-18 18:43:51.251636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.179 [2024-11-18 18:43:51.251698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.251725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.259118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.259168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.259199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.266590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.266664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.266702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.273368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.273416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.273447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.278224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.278271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.278301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.284882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.284944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.284975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.291899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.291962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.291993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.297036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.297084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.297114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.302945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.302993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.303024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.309979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.310036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.310066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.316813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.316870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.316905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.324320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.324385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.324416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.332281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.332346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.332377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.340546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.340593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.340633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.348347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.348396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.348426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.356587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.356649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.356685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.363639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.363698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.363725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.370156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.370211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.370239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.376554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.376617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.376663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.382745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.382791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.382829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.388407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.388464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.388494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.392460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.392514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.392544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.397915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.397982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.398012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.402920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.402976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.403006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.409350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.409407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.409438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.418018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.418077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.418107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.427177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.427235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.427267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.436727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.436785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.436816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.446332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.446390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.446421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.455646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.455705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.455735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.465357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.465417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.465447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.474759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.474799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.474832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.484201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.484258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.484289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.493730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.179 [2024-11-18 18:43:51.493782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.179 [2024-11-18 18:43:51.493809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:53.179 [2024-11-18 18:43:51.503397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.180 [2024-11-18 18:43:51.503455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.180 [2024-11-18 18:43:51.503485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.180 [2024-11-18 18:43:51.513228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.180 [2024-11-18 18:43:51.513284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.180 [2024-11-18 18:43:51.513312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.438 [2024-11-18 18:43:51.522973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.438 [2024-11-18 18:43:51.523033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.438 [2024-11-18 18:43:51.523073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:53.438 [2024-11-18 18:43:51.531508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.438 [2024-11-18 18:43:51.531567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.438 [2024-11-18 18:43:51.531598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:53.438 [2024-11-18 18:43:51.538524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.438 [2024-11-18 18:43:51.538583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.438 [2024-11-18 18:43:51.538621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.438 [2024-11-18 18:43:51.544855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.438 [2024-11-18 18:43:51.544922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.438 [2024-11-18 18:43:51.544952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.438 [2024-11-18 18:43:51.551891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.438 [2024-11-18 18:43:51.551943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.438 [2024-11-18 18:43:51.551989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:53.438 [2024-11-18 18:43:51.559758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.438 [2024-11-18 18:43:51.559814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.438 [2024-11-18 18:43:51.559848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:53.438 [2024-11-18 18:43:51.566713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.438 [2024-11-18 18:43:51.566756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.438 [2024-11-18 18:43:51.566782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.438 [2024-11-18 18:43:51.573160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.438 [2024-11-18 18:43:51.573215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.438 [2024-11-18 18:43:51.573245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.438 [2024-11-18 18:43:51.579403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.438 [2024-11-18 18:43:51.579460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.438 [2024-11-18 18:43:51.579489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:53.438 [2024-11-18 18:43:51.586103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.438 [2024-11-18 18:43:51.586172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.438 [2024-11-18 18:43:51.586204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:53.438 [2024-11-18 18:43:51.592395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.438 [2024-11-18 18:43:51.592452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.438 [2024-11-18 18:43:51.592482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.438 [2024-11-18 18:43:51.598762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.438 [2024-11-18 18:43:51.598813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.438 [2024-11-18 18:43:51.598839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.438 [2024-11-18 18:43:51.605037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.438 [2024-11-18 18:43:51.605095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.438 [2024-11-18 18:43:51.605124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:53.438 [2024-11-18 18:43:51.611359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.438 [2024-11-18 18:43:51.611407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.438 [2024-11-18 18:43:51.611444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:53.438 [2024-11-18 18:43:51.617599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.438 [2024-11-18 18:43:51.617678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.617704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.439 [2024-11-18 18:43:51.623930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.439 [2024-11-18 18:43:51.623986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.624017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.439 [2024-11-18 18:43:51.630330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.439 [2024-11-18 18:43:51.630387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.630416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:53.439 [2024-11-18 18:43:51.636945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.439 [2024-11-18 18:43:51.637002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.637040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:53.439 [2024-11-18 18:43:51.643110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.439 [2024-11-18 18:43:51.643157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.643192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.439 [2024-11-18 18:43:51.650300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.439 [2024-11-18 18:43:51.650357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.650387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.439 [2024-11-18 18:43:51.658974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.439 [2024-11-18 18:43:51.659040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.659071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:53.439 [2024-11-18 18:43:51.666773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.439 [2024-11-18 18:43:51.666814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.666839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:53.439 [2024-11-18 18:43:51.673828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.439 [2024-11-18 18:43:51.673884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.673909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.439 [2024-11-18 18:43:51.680464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.439 [2024-11-18 18:43:51.680520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.680549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.439 [2024-11-18 18:43:51.687668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.439 [2024-11-18 18:43:51.687719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.687760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:53.439 [2024-11-18 18:43:51.696365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.439 [2024-11-18 18:43:51.696414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.696454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:53.439 [2024-11-18 18:43:51.704165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.439 [2024-11-18 18:43:51.704222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.704259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.439 [2024-11-18 18:43:51.711639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.439 [2024-11-18 18:43:51.711708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.711733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.439 [2024-11-18 18:43:51.718931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.439 [2024-11-18 18:43:51.718990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.719021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:53.439 [2024-11-18 18:43:51.726288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.439 [2024-11-18 18:43:51.726347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.726378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:53.439 [2024-11-18 18:43:51.733605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.439 [2024-11-18 18:43:51.733683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.733709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.439 [2024-11-18 18:43:51.740913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.439 [2024-11-18 18:43:51.740969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.741001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.439 [2024-11-18 18:43:51.747924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.439 [2024-11-18 18:43:51.747996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.748026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:53.439 [2024-11-18 18:43:51.754337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.439 [2024-11-18 18:43:51.754395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.754425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:53.439 [2024-11-18 18:43:51.760706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.439 [2024-11-18 18:43:51.760760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.439 [2024-11-18 18:43:51.760799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.440 [2024-11-18 18:43:51.767331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.440 [2024-11-18 18:43:51.767387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.440 [2024-11-18 18:43:51.767414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.699 [2024-11-18 18:43:51.774173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.699 [2024-11-18 18:43:51.774227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.699 [2024-11-18 18:43:51.774253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:53.699 [2024-11-18 18:43:51.781521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.699 [2024-11-18 18:43:51.781578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.699 [2024-11-18 18:43:51.781617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:53.699 [2024-11-18 18:43:51.790245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.699 [2024-11-18 18:43:51.790305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.699 [2024-11-18 18:43:51.790335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.699 [2024-11-18 18:43:51.799015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.699 [2024-11-18 18:43:51.799072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.699 [2024-11-18 18:43:51.799103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.699 [2024-11-18 18:43:51.807976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.699 [2024-11-18 18:43:51.808034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.699 [2024-11-18 18:43:51.808064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:53.699 [2024-11-18 18:43:51.816916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.699 [2024-11-18 18:43:51.816987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.699 [2024-11-18 18:43:51.817018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:53.699 [2024-11-18 18:43:51.824723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.699 [2024-11-18 18:43:51.824774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.699 [2024-11-18 18:43:51.824800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.699 [2024-11-18 18:43:51.831535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.699 [2024-11-18 18:43:51.831598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.699 [2024-11-18 18:43:51.831665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.699 [2024-11-18 18:43:51.839045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.699 [2024-11-18 18:43:51.839094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.699 [2024-11-18 18:43:51.839124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:53.699 [2024-11-18 18:43:51.846367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.699 [2024-11-18 18:43:51.846425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.699 [2024-11-18 18:43:51.846454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:53.699 [2024-11-18 18:43:51.853765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.699 [2024-11-18 18:43:51.853805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.699 [2024-11-18 18:43:51.853830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:53.699 [2024-11-18 18:43:51.861052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.699 [2024-11-18 18:43:51.861108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.699 [2024-11-18 18:43:51.861138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:53.699 [2024-11-18 18:43:51.868320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.699 [2024-11-18 18:43:51.868378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.699 [2024-11-18 18:43:51.868408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:53.699 [2024-11-18 18:43:51.875497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.699 [2024-11-18 18:43:51.875552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.699 [2024-11-18 18:43:51.875583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:53.699 [2024-11-18 18:43:51.882883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.699 [2024-11-18 18:43:51.882923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.699 [2024-11-18 18:43:51.882948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:53.699 [2024-11-18 18:43:51.890358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.699 [2024-11-18 18:43:51.890406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.699 [2024-11-18 18:43:51.890444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:53.699 [2024-11-18 18:43:51.897457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.699 [2024-11-18 18:43:51.897516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:53.699 [2024-11-18 18:43:51.897546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:53.699 [2024-11-18 18:43:51.905482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.699 [2024-11-18 18:43:51.905540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.699 [2024-11-18 18:43:51.905571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:53.699 [2024-11-18 18:43:51.914249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.699 [2024-11-18 18:43:51.914306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.699 [2024-11-18 18:43:51.914336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:53.700 [2024-11-18 18:43:51.923092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.700 [2024-11-18 18:43:51.923150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.700 [2024-11-18 18:43:51.923181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:53.700 [2024-11-18 18:43:51.931129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.700 [2024-11-18 18:43:51.931185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.700 [2024-11-18 18:43:51.931215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:53.700 [2024-11-18 18:43:51.938146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.700 [2024-11-18 18:43:51.938204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.700 [2024-11-18 18:43:51.938236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:53.700 [2024-11-18 18:43:51.945740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.700 [2024-11-18 18:43:51.945795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.700 [2024-11-18 18:43:51.945826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:53.700 [2024-11-18 18:43:51.952008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.700 [2024-11-18 18:43:51.952063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.700 [2024-11-18 18:43:51.952093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:53.700 [2024-11-18 18:43:51.958224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:53.700 [2024-11-18 18:43:51.958290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.700 [2024-11-18 18:43:51.958320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:53.700 [2024-11-18 18:43:51.964326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.700 [2024-11-18 18:43:51.964383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.700 [2024-11-18 18:43:51.964432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:53.700 [2024-11-18 18:43:51.970794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.700 [2024-11-18 18:43:51.970832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.700 [2024-11-18 18:43:51.970859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:53.700 4283.00 IOPS, 535.38 MiB/s [2024-11-18T17:43:52.037Z] [2024-11-18 18:43:51.979019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.700 [2024-11-18 18:43:51.979076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.700 [2024-11-18 18:43:51.979106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:36:53.700 [2024-11-18 18:43:51.985564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.700 [2024-11-18 18:43:51.985620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.700 [2024-11-18 18:43:51.985651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:53.700 [2024-11-18 18:43:51.991986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.700 [2024-11-18 18:43:51.992043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.700 [2024-11-18 18:43:51.992073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:53.700 [2024-11-18 18:43:51.998354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.700 [2024-11-18 18:43:51.998410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.700 [2024-11-18 18:43:51.998440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:53.700 [2024-11-18 18:43:52.004634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.700 [2024-11-18 18:43:52.004705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.700 [2024-11-18 18:43:52.004732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:53.700 [2024-11-18 18:43:52.010919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.700 [2024-11-18 18:43:52.010989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.700 [2024-11-18 18:43:52.011020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:53.700 [2024-11-18 18:43:52.017163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.700 [2024-11-18 18:43:52.017220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.700 [2024-11-18 18:43:52.017251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:53.700 [2024-11-18 18:43:52.023387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.700 [2024-11-18 18:43:52.023441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.700 [2024-11-18 18:43:52.023471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:53.700 [2024-11-18 18:43:52.029618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.700 [2024-11-18 18:43:52.029674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.700 
[2024-11-18 18:43:52.029708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:53.959 [2024-11-18 18:43:52.035945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.959 [2024-11-18 18:43:52.036017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.959 [2024-11-18 18:43:52.036047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:53.959 [2024-11-18 18:43:52.042400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.959 [2024-11-18 18:43:52.042456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.959 [2024-11-18 18:43:52.042486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:53.959 [2024-11-18 18:43:52.048931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.959 [2024-11-18 18:43:52.048996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.959 [2024-11-18 18:43:52.049027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:53.959 [2024-11-18 18:43:52.055283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.959 [2024-11-18 18:43:52.055339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.959 [2024-11-18 18:43:52.055369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:53.960 [2024-11-18 18:43:52.061852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.960 [2024-11-18 18:43:52.061905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.960 [2024-11-18 18:43:52.061930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:53.960 [2024-11-18 18:43:52.068194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.960 [2024-11-18 18:43:52.068258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.960 [2024-11-18 18:43:52.068288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:53.960 [2024-11-18 18:43:52.075018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.960 [2024-11-18 18:43:52.075074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.960 [2024-11-18 18:43:52.075103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:53.960 [2024-11-18 18:43:52.082079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.960 [2024-11-18 
18:43:52.082127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.960 [2024-11-18 18:43:52.082167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:53.960 [2024-11-18 18:43:52.089822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.960 [2024-11-18 18:43:52.089881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.960 [2024-11-18 18:43:52.089912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:53.960 [2024-11-18 18:43:52.098410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.960 [2024-11-18 18:43:52.098469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.960 [2024-11-18 18:43:52.098500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:53.960 [2024-11-18 18:43:52.106881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.960 [2024-11-18 18:43:52.106951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.960 [2024-11-18 18:43:52.106982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:53.960 [2024-11-18 18:43:52.114650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.960 [2024-11-18 18:43:52.114709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.960 [2024-11-18 18:43:52.114740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:53.960 [2024-11-18 18:43:52.122150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.960 [2024-11-18 18:43:52.122210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.960 [2024-11-18 18:43:52.122240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:53.960 [2024-11-18 18:43:52.128916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.960 [2024-11-18 18:43:52.128974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.960 [2024-11-18 18:43:52.129005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:53.960 [2024-11-18 18:43:52.135238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.960 [2024-11-18 18:43:52.135295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.960 [2024-11-18 18:43:52.135325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:53.960 [2024-11-18 
18:43:52.141379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.960 [2024-11-18 18:43:52.141436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.960 [2024-11-18 18:43:52.141467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:53.960 [2024-11-18 18:43:52.146041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.960 [2024-11-18 18:43:52.146097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.960 [2024-11-18 18:43:52.146128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:53.960 [2024-11-18 18:43:52.152532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.960 [2024-11-18 18:43:52.152590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.960 [2024-11-18 18:43:52.152632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:53.960 [2024-11-18 18:43:52.159939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.960 [2024-11-18 18:43:52.159996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.960 [2024-11-18 18:43:52.160027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:53.960 [2024-11-18 18:43:52.168761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.960 [2024-11-18 18:43:52.168820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.960 [2024-11-18 18:43:52.168846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:53.960 [2024-11-18 18:43:52.177527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.960 [2024-11-18 18:43:52.177587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.960 [2024-11-18 18:43:52.177626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:53.960 [2024-11-18 18:43:52.185018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.960 [2024-11-18 18:43:52.185077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.960 [2024-11-18 18:43:52.185107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:53.960 [2024-11-18 18:43:52.191920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:53.960 [2024-11-18 18:43:52.191999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.960 [2024-11-18 18:43:52.192031] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:53.960 [2024-11-18 18:43:52.199085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.960 [2024-11-18 18:43:52.199132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.960 [2024-11-18 18:43:52.199161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:53.960 [2024-11-18 18:43:52.207604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:53.960 [2024-11-18 18:43:52.207668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.960 [2024-11-18 18:43:52.207699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... repeated triplets of "*ERROR*: data digest error on tqpair=(0x6150001f2a00)" followed by READ command / COMMAND TRANSIENT TRANSPORT ERROR (00/22) notices (qid:1, varying cid and lba, len:32) continue from 18:43:52.216 through 18:43:52.731 ...]
00:36:54.482 [2024-11-18 18:43:52.731621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:54.482 [2024-11-18 18:43:52.731682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:54.482 [2024-11-18 18:43:52.731708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:54.482 [2024-11-18 18:43:52.738979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:54.482 [2024-11-18 18:43:52.739027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.482 [2024-11-18 18:43:52.739058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:54.482 [2024-11-18 18:43:52.746160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.482 [2024-11-18 18:43:52.746208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.482 [2024-11-18 18:43:52.746237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:54.482 [2024-11-18 18:43:52.753403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.482 [2024-11-18 18:43:52.753453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.482 [2024-11-18 18:43:52.753485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:54.482 [2024-11-18 18:43:52.759879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.482 [2024-11-18 18:43:52.759938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.482 [2024-11-18 18:43:52.759969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:54.482 [2024-11-18 18:43:52.765931] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.482 [2024-11-18 18:43:52.765976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.482 [2024-11-18 18:43:52.766012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:54.482 [2024-11-18 18:43:52.772184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.482 [2024-11-18 18:43:52.772232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.482 [2024-11-18 18:43:52.772261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:54.482 [2024-11-18 18:43:52.778484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.482 [2024-11-18 18:43:52.778531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.482 [2024-11-18 18:43:52.778561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:54.482 [2024-11-18 18:43:52.784767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.482 [2024-11-18 18:43:52.784809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.482 [2024-11-18 18:43:52.784836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:54.482 [2024-11-18 18:43:52.791086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.482 [2024-11-18 18:43:52.791133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.482 [2024-11-18 18:43:52.791162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:54.482 [2024-11-18 18:43:52.797285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.482 [2024-11-18 18:43:52.797333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.482 [2024-11-18 18:43:52.797363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:54.482 [2024-11-18 18:43:52.803603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.482 [2024-11-18 18:43:52.803672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.482 [2024-11-18 18:43:52.803715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:54.482 [2024-11-18 18:43:52.809907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.482 [2024-11-18 18:43:52.809964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.482 [2024-11-18 18:43:52.809995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:54.482 [2024-11-18 18:43:52.816412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.482 [2024-11-18 18:43:52.816456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.816483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.822708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.822761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.822788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.828817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.828860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.828907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.835483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.835531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.835562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.841867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.841927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.841954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.848172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.848220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.848250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.855203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.855250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.855279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.862185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.862234] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.862264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.869889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.869948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.869978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.874558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.874617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.874675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.879899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.879959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.879990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.887075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.887124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.887155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.894198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.894247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.894279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.901494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.901543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.901573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.908836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.908886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.908914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.916123] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.916170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.916197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.923546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.923616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.923648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.931401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.931447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.931476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.938675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.938737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.938781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.945819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.945864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.945919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.952456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.952502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.952530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.959652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.959698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.959740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.966663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.966725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.966753] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:54.741 [2024-11-18 18:43:52.973913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:54.741 [2024-11-18 18:43:52.973959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.741 [2024-11-18 18:43:52.974020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:54.741 4439.50 IOPS, 554.94 MiB/s 00:36:54.741 Latency(us) 00:36:54.741 [2024-11-18T17:43:53.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:54.741 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:54.741 nvme0n1 : 2.00 4440.03 555.00 0.00 0.00 3597.12 976.97 14272.28 00:36:54.741 [2024-11-18T17:43:53.078Z] =================================================================================================================== 00:36:54.741 [2024-11-18T17:43:53.078Z] Total : 4440.03 555.00 0.00 0.00 3597.12 976.97 14272.28 00:36:54.741 { 00:36:54.741 "results": [ 00:36:54.741 { 00:36:54.741 "job": "nvme0n1", 00:36:54.741 "core_mask": "0x2", 00:36:54.741 "workload": "randread", 00:36:54.741 "status": "finished", 00:36:54.741 "queue_depth": 16, 00:36:54.741 "io_size": 131072, 00:36:54.741 "runtime": 2.003366, 00:36:54.741 "iops": 4440.027433828866, 00:36:54.741 "mibps": 555.0034292286083, 00:36:54.741 "io_failed": 0, 00:36:54.741 "io_timeout": 0, 00:36:54.741 "avg_latency_us": 3597.1177228988404, 00:36:54.741 "min_latency_us": 976.9718518518519, 00:36:54.741 "max_latency_us": 14272.284444444444 00:36:54.741 } 00:36:54.741 ], 00:36:54.741 "core_count": 1 00:36:54.741 } 00:36:54.741 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # 
get_transient_errcount nvme0n1 00:36:54.741 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:54.741 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:54.741 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:54.741 | .driver_specific 00:36:54.741 | .nvme_error 00:36:54.741 | .status_code 00:36:54.741 | .command_transient_transport_error' 00:36:54.999 18:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 287 > 0 )) 00:36:54.999 18:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3131773 00:36:54.999 18:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3131773 ']' 00:36:54.999 18:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3131773 00:36:54.999 18:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:54.999 18:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:54.999 18:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3131773 00:36:54.999 18:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:54.999 18:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:54.999 18:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3131773' 00:36:54.999 killing process with pid 3131773 00:36:54.999 18:43:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3131773 00:36:54.999 Received shutdown signal, test time was about 2.000000 seconds 00:36:54.999 00:36:54.999 Latency(us) 00:36:54.999 [2024-11-18T17:43:53.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:54.999 [2024-11-18T17:43:53.336Z] =================================================================================================================== 00:36:54.999 [2024-11-18T17:43:53.336Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:54.999 18:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3131773 00:36:55.933 18:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:36:55.933 18:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:55.933 18:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:55.933 18:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:55.933 18:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:55.933 18:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3132354 00:36:55.933 18:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:36:55.933 18:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3132354 /var/tmp/bperf.sock 00:36:55.933 18:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3132354 ']' 00:36:55.933 18:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:36:55.933 18:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:55.933 18:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:55.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:55.933 18:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:55.933 18:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:56.192 [2024-11-18 18:43:54.276486] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:36:56.192 [2024-11-18 18:43:54.276636] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132354 ] 00:36:56.192 [2024-11-18 18:43:54.417655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:56.450 [2024-11-18 18:43:54.557481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:57.015 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:57.015 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:57.015 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:57.015 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:57.273 
18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:57.273 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.273 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:57.273 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.273 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:57.273 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:57.838 nvme0n1 00:36:57.838 18:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:57.838 18:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.838 18:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:57.838 18:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.838 18:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:57.838 18:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:58.096 Running I/O for 2 seconds... 
00:36:58.096 [2024-11-18 18:43:56.196723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.096 [2024-11-18 18:43:56.197083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.096 [2024-11-18 18:43:56.197140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.096 [2024-11-18 18:43:56.215193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.096 [2024-11-18 18:43:56.215512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.096 [2024-11-18 18:43:56.215558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.096 [2024-11-18 18:43:56.233227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.096 [2024-11-18 18:43:56.233550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.096 [2024-11-18 18:43:56.233627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.096 [2024-11-18 18:43:56.251419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.096 [2024-11-18 18:43:56.251762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.096 [2024-11-18 18:43:56.251845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.096 [2024-11-18 18:43:56.269778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.096 [2024-11-18 18:43:56.270128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.096 [2024-11-18 18:43:56.270196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.096 [2024-11-18 18:43:56.287906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.096 [2024-11-18 18:43:56.288250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.096 [2024-11-18 18:43:56.288317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.096 [2024-11-18 18:43:56.305946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.096 [2024-11-18 18:43:56.306294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.096 [2024-11-18 18:43:56.306339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.096 [2024-11-18 18:43:56.323799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.096 [2024-11-18 18:43:56.324130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.096 [2024-11-18 18:43:56.324198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.096 [2024-11-18 18:43:56.341476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.096 [2024-11-18 18:43:56.341812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.096 [2024-11-18 18:43:56.341881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.096 [2024-11-18 18:43:56.359047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.096 [2024-11-18 18:43:56.359364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.096 [2024-11-18 18:43:56.359429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.096 [2024-11-18 18:43:56.376541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.096 [2024-11-18 18:43:56.376870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.096 [2024-11-18 18:43:56.376960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.096 [2024-11-18 18:43:56.394044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.096 [2024-11-18 18:43:56.394363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.096 [2024-11-18 18:43:56.394428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.096 [2024-11-18 18:43:56.411490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.096 [2024-11-18 18:43:56.411825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.096 [2024-11-18 18:43:56.411893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.096 [2024-11-18 18:43:56.429038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.096 [2024-11-18 18:43:56.429356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.096 [2024-11-18 18:43:56.429422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.354 [2024-11-18 18:43:56.446665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.354 [2024-11-18 18:43:56.446983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.354 [2024-11-18 18:43:56.447063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.354 [2024-11-18 18:43:56.464238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.354 [2024-11-18 18:43:56.464556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.354 [2024-11-18 18:43:56.464630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.354 [2024-11-18 18:43:56.481666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.355 [2024-11-18 18:43:56.481983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.355 [2024-11-18 18:43:56.482027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.355 [2024-11-18 18:43:56.499167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.355 [2024-11-18 18:43:56.499486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.355 [2024-11-18 18:43:56.499552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.355 [2024-11-18 18:43:56.516647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.355 [2024-11-18 18:43:56.516978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.355 [2024-11-18 18:43:56.517021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.355 [2024-11-18 18:43:56.534152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.355 [2024-11-18 18:43:56.534475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.355 [2024-11-18 18:43:56.534542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.355 [2024-11-18 18:43:56.551655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.355 [2024-11-18 18:43:56.551970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.355 [2024-11-18 18:43:56.552037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.355 [2024-11-18 18:43:56.569071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.355 [2024-11-18 18:43:56.569418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.355 [2024-11-18 18:43:56.569462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.355 [2024-11-18 18:43:56.586454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.355 [2024-11-18 18:43:56.586784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.355 [2024-11-18 18:43:56.586851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.355 [2024-11-18 18:43:56.604085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.355 [2024-11-18 18:43:56.604430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.355 [2024-11-18 18:43:56.604474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.355 [2024-11-18 18:43:56.621522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.355 [2024-11-18 18:43:56.621848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.355 [2024-11-18 18:43:56.621913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.355 [2024-11-18 18:43:56.638948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.355 [2024-11-18 18:43:56.639283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.355 [2024-11-18 18:43:56.639327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.355 [2024-11-18 18:43:56.656328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.355 [2024-11-18 18:43:56.656656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.355 [2024-11-18 18:43:56.656723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.355 [2024-11-18 18:43:56.673732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.355 [2024-11-18 18:43:56.674069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.355 [2024-11-18 18:43:56.674120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.614 [2024-11-18 18:43:56.691103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.614 [2024-11-18 18:43:56.691420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.614 [2024-11-18 18:43:56.691487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.614 [2024-11-18 18:43:56.708624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.614 [2024-11-18 18:43:56.708959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.614 [2024-11-18 18:43:56.709003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.614 [2024-11-18 18:43:56.726115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.614 [2024-11-18 18:43:56.726434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.614 [2024-11-18 18:43:56.726500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.614 [2024-11-18 18:43:56.743807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.614 [2024-11-18 18:43:56.744125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.614 [2024-11-18 18:43:56.744191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.614 [2024-11-18 18:43:56.761292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.614 [2024-11-18 18:43:56.761605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.614 [2024-11-18 18:43:56.761681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.614 [2024-11-18 18:43:56.778781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.614 [2024-11-18 18:43:56.779116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.614 [2024-11-18 18:43:56.779160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.614 [2024-11-18 18:43:56.796193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.614 [2024-11-18 18:43:56.796516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.614 [2024-11-18 18:43:56.796582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.614 [2024-11-18 18:43:56.813601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.614 [2024-11-18 18:43:56.813949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.614 [2024-11-18 18:43:56.813992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.614 [2024-11-18 18:43:56.831108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.614 [2024-11-18 18:43:56.831440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.614 [2024-11-18 18:43:56.831507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.614 [2024-11-18 18:43:56.848557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.614 [2024-11-18 18:43:56.848886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.614 [2024-11-18 18:43:56.848953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.614 [2024-11-18 18:43:56.865968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.614 [2024-11-18 18:43:56.866286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.614 [2024-11-18 18:43:56.866354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.614 [2024-11-18 18:43:56.883330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.614 [2024-11-18 18:43:56.883654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.614 [2024-11-18 18:43:56.883722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.614 [2024-11-18 18:43:56.900796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.614 [2024-11-18 18:43:56.901112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.614 [2024-11-18 18:43:56.901156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.614 [2024-11-18 18:43:56.918172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.614 [2024-11-18 18:43:56.918491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.614 [2024-11-18 18:43:56.918557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.614 [2024-11-18 18:43:56.935654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.614 [2024-11-18 18:43:56.935973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.614 [2024-11-18 18:43:56.936038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.873 [2024-11-18 18:43:56.953092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.873 [2024-11-18 18:43:56.953416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.873 [2024-11-18 18:43:56.953481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.873 [2024-11-18 18:43:56.970815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.873 [2024-11-18 18:43:56.971139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.873 [2024-11-18 18:43:56.971183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.873 [2024-11-18 18:43:56.988789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.873 [2024-11-18 18:43:56.989107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.873 [2024-11-18 18:43:56.989173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.873 [2024-11-18 18:43:57.006599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.873 [2024-11-18 18:43:57.006926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.873 [2024-11-18 18:43:57.006992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.873 [2024-11-18 18:43:57.024358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.873 [2024-11-18 18:43:57.024691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.873 [2024-11-18 18:43:57.024759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.873 [2024-11-18 18:43:57.042122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.873 [2024-11-18 18:43:57.042442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.873 [2024-11-18 18:43:57.042510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.873 [2024-11-18 18:43:57.059957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.873 [2024-11-18 18:43:57.060278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.873 [2024-11-18 18:43:57.060346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.873 [2024-11-18 18:43:57.077593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.873 [2024-11-18 18:43:57.077926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.873 [2024-11-18 18:43:57.077994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.873 [2024-11-18 18:43:57.095130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.873 [2024-11-18 18:43:57.095447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.874 [2024-11-18 18:43:57.095515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.874 [2024-11-18 18:43:57.112551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.874 [2024-11-18 18:43:57.112877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.874 [2024-11-18 18:43:57.112945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.874 [2024-11-18 18:43:57.129950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.874 [2024-11-18 18:43:57.130268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.874 [2024-11-18 18:43:57.130344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.874 [2024-11-18 18:43:57.147344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.874 [2024-11-18 18:43:57.147669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.874 [2024-11-18 18:43:57.147737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.874 [2024-11-18 18:43:57.164676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.874 [2024-11-18 18:43:57.164993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.874 [2024-11-18 18:43:57.165037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.874 [2024-11-18 18:43:57.182098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.874 14466.00 IOPS, 56.51 MiB/s [2024-11-18T17:43:57.211Z] [2024-11-18 18:43:57.182910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.874 [2024-11-18 18:43:57.182974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:58.874 [2024-11-18 18:43:57.199504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:58.874 [2024-11-18 18:43:57.199834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:58.874 [2024-11-18 18:43:57.199900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:59.132 [2024-11-18 18:43:57.216905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:59.132 [2024-11-18 18:43:57.217224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:59.132 [2024-11-18 18:43:57.217290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:59.132 [2024-11-18 18:43:57.234303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:59.132 [2024-11-18 18:43:57.234625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:59.132 [2024-11-18 18:43:57.234670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:59.132 [2024-11-18 18:43:57.251778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:59.132 [2024-11-18 18:43:57.252096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:59.132 [2024-11-18 18:43:57.252162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:59.132 [2024-11-18 18:43:57.269175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:59.132 [2024-11-18 18:43:57.269490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:59.132 [2024-11-18 18:43:57.269534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:59.132 [2024-11-18 18:43:57.286678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:59.132 [2024-11-18 18:43:57.286996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:59.132 [2024-11-18 18:43:57.287061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:59.132 [2024-11-18 18:43:57.304096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:59.132 [2024-11-18 18:43:57.304414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:59.132 [2024-11-18 18:43:57.304458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:59.132 [2024-11-18 18:43:57.321680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:59.132 [2024-11-18 18:43:57.322003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:59.132 [2024-11-18 18:43:57.322070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:59.132 [2024-11-18 18:43:57.339069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:59.132 [2024-11-18 18:43:57.339387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:59.132 [2024-11-18 18:43:57.339432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:59.132 [2024-11-18 18:43:57.356492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:59.132 [2024-11-18 18:43:57.356837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:59.132 [2024-11-18 18:43:57.356881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:59.132 [2024-11-18 18:43:57.373858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:59.133 [2024-11-18 18:43:57.374175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:59.133 [2024-11-18 18:43:57.374240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:59.133 [2024-11-18 18:43:57.391245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:59.133 [2024-11-18 18:43:57.391558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:59.133 [2024-11-18 18:43:57.391634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:59.133 [2024-11-18 18:43:57.408620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:59.133 [2024-11-18 18:43:57.408940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:59.133 [2024-11-18 18:43:57.409007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:59.133 [2024-11-18 18:43:57.426040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:59.133 [2024-11-18 18:43:57.426374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:59.133 [2024-11-18 18:43:57.426425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:59.133 [2024-11-18 18:43:57.443361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:59.133 [2024-11-18 18:43:57.443684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:59.133 [2024-11-18 18:43:57.443752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:59.133 [2024-11-18 18:43:57.460767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:59.133 [2024-11-18 18:43:57.461100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:59.133 [2024-11-18 18:43:57.461145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:59.391 [2024-11-18 18:43:57.478284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:59.391 [2024-11-18 18:43:57.478605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:59.391 [2024-11-18 18:43:57.478684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:59.391 [2024-11-18 18:43:57.495818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:59.391 [2024-11-18 18:43:57.496155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:59.391 [2024-11-18 18:43:57.496199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:59.391 [2024-11-18 18:43:57.513455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:59.391 [2024-11-18 18:43:57.513785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:59.391 [2024-11-18 18:43:57.513864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:59.391 [2024-11-18 18:43:57.530888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58
00:36:59.391 [2024-11-18 18:43:57.531202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:59.391 [2024-11-18 18:43:57.531267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 
sqhd:0064 p:0 m:0 dnr:0 00:36:59.391 [2024-11-18 18:43:57.548400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.391 [2024-11-18 18:43:57.548737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.391 [2024-11-18 18:43:57.548781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.391 [2024-11-18 18:43:57.566112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.391 [2024-11-18 18:43:57.566429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.391 [2024-11-18 18:43:57.566511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.391 [2024-11-18 18:43:57.583711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.391 [2024-11-18 18:43:57.584030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.391 [2024-11-18 18:43:57.584098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.391 [2024-11-18 18:43:57.601328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.391 [2024-11-18 18:43:57.601670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.391 [2024-11-18 18:43:57.601738] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.391 [2024-11-18 18:43:57.619111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.391 [2024-11-18 18:43:57.619430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.391 [2024-11-18 18:43:57.619475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.391 [2024-11-18 18:43:57.636544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.391 [2024-11-18 18:43:57.636871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.391 [2024-11-18 18:43:57.636936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.391 [2024-11-18 18:43:57.653967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.391 [2024-11-18 18:43:57.654287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.391 [2024-11-18 18:43:57.654353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.391 [2024-11-18 18:43:57.671409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.392 [2024-11-18 18:43:57.671736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.392 [2024-11-18 
18:43:57.671802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.392 [2024-11-18 18:43:57.688801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.392 [2024-11-18 18:43:57.689120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.392 [2024-11-18 18:43:57.689164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.392 [2024-11-18 18:43:57.706270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.392 [2024-11-18 18:43:57.706587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.392 [2024-11-18 18:43:57.706663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.392 [2024-11-18 18:43:57.723728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.392 [2024-11-18 18:43:57.724045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.392 [2024-11-18 18:43:57.724095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.650 [2024-11-18 18:43:57.741261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.650 [2024-11-18 18:43:57.741579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16222 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.650 [2024-11-18 18:43:57.741668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.650 [2024-11-18 18:43:57.758849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.650 [2024-11-18 18:43:57.759165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.650 [2024-11-18 18:43:57.759229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.650 [2024-11-18 18:43:57.776330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.650 [2024-11-18 18:43:57.776651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.650 [2024-11-18 18:43:57.776716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.650 [2024-11-18 18:43:57.793833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.650 [2024-11-18 18:43:57.794146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.650 [2024-11-18 18:43:57.794211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.650 [2024-11-18 18:43:57.811275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.650 [2024-11-18 18:43:57.811591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.650 [2024-11-18 18:43:57.811665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.650 [2024-11-18 18:43:57.828730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.650 [2024-11-18 18:43:57.829047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.650 [2024-11-18 18:43:57.829111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.650 [2024-11-18 18:43:57.846455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.650 [2024-11-18 18:43:57.846793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.650 [2024-11-18 18:43:57.846871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.650 [2024-11-18 18:43:57.864219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.650 [2024-11-18 18:43:57.864532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.650 [2024-11-18 18:43:57.864599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.650 [2024-11-18 18:43:57.881825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016bfeb58 00:36:59.650 [2024-11-18 18:43:57.882145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.651 [2024-11-18 18:43:57.882211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.651 [2024-11-18 18:43:57.899296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.651 [2024-11-18 18:43:57.899619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.651 [2024-11-18 18:43:57.899684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.651 [2024-11-18 18:43:57.916751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.651 [2024-11-18 18:43:57.917066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.651 [2024-11-18 18:43:57.917106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.651 [2024-11-18 18:43:57.934234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.651 [2024-11-18 18:43:57.934545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.651 [2024-11-18 18:43:57.934620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.651 [2024-11-18 18:43:57.951694] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.651 [2024-11-18 18:43:57.952012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.651 [2024-11-18 18:43:57.952054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.651 [2024-11-18 18:43:57.969144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.651 [2024-11-18 18:43:57.969479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.651 [2024-11-18 18:43:57.969542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.909 [2024-11-18 18:43:57.986874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.909 [2024-11-18 18:43:57.987202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.909 [2024-11-18 18:43:57.987244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.909 [2024-11-18 18:43:58.004688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.909 [2024-11-18 18:43:58.005012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.909 [2024-11-18 18:43:58.005075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 
sqhd:0064 p:0 m:0 dnr:0 00:36:59.909 [2024-11-18 18:43:58.022349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.909 [2024-11-18 18:43:58.022674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.909 [2024-11-18 18:43:58.022716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.909 [2024-11-18 18:43:58.039904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.909 [2024-11-18 18:43:58.040220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.909 [2024-11-18 18:43:58.040283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.909 [2024-11-18 18:43:58.057681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.909 [2024-11-18 18:43:58.058001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.909 [2024-11-18 18:43:58.058066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.909 [2024-11-18 18:43:58.075809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.910 [2024-11-18 18:43:58.076143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.910 [2024-11-18 18:43:58.076206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.910 [2024-11-18 18:43:58.094047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.910 [2024-11-18 18:43:58.094374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.910 [2024-11-18 18:43:58.094440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.910 [2024-11-18 18:43:58.112229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.910 [2024-11-18 18:43:58.112554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.910 [2024-11-18 18:43:58.112636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.910 [2024-11-18 18:43:58.130431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.910 [2024-11-18 18:43:58.130771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.910 [2024-11-18 18:43:58.130849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.910 [2024-11-18 18:43:58.148798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.910 [2024-11-18 18:43:58.149139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.910 [2024-11-18 
18:43:58.149181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.910 [2024-11-18 18:43:58.167182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.910 [2024-11-18 18:43:58.167506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.910 [2024-11-18 18:43:58.167570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.910 14492.50 IOPS, 56.61 MiB/s [2024-11-18T17:43:58.247Z] [2024-11-18 18:43:58.185653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:36:59.910 [2024-11-18 18:43:58.185984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.910 [2024-11-18 18:43:58.186062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:59.910 00:36:59.910 Latency(us) 00:36:59.910 [2024-11-18T17:43:58.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:59.910 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:59.910 nvme0n1 : 2.01 14492.96 56.61 0.00 0.00 8806.00 3543.80 18932.62 00:36:59.910 [2024-11-18T17:43:58.247Z] =================================================================================================================== 00:36:59.910 [2024-11-18T17:43:58.247Z] Total : 14492.96 56.61 0.00 0.00 8806.00 3543.80 18932.62 00:36:59.910 { 00:36:59.910 "results": [ 00:36:59.910 { 00:36:59.910 "job": "nvme0n1", 00:36:59.910 "core_mask": "0x2", 00:36:59.910 "workload": "randwrite", 00:36:59.910 "status": 
"finished", 00:36:59.910 "queue_depth": 128, 00:36:59.910 "io_size": 4096, 00:36:59.910 "runtime": 2.008768, 00:36:59.910 "iops": 14492.962850861823, 00:36:59.910 "mibps": 56.613136136178994, 00:36:59.910 "io_failed": 0, 00:36:59.910 "io_timeout": 0, 00:36:59.910 "avg_latency_us": 8806.002880525564, 00:36:59.910 "min_latency_us": 3543.7985185185184, 00:36:59.910 "max_latency_us": 18932.62222222222 00:36:59.910 } 00:36:59.910 ], 00:36:59.910 "core_count": 1 00:36:59.910 } 00:36:59.910 18:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:59.910 18:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:59.910 18:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:59.910 | .driver_specific 00:36:59.910 | .nvme_error 00:36:59.910 | .status_code 00:36:59.910 | .command_transient_transport_error' 00:36:59.910 18:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:00.168 18:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 114 > 0 )) 00:37:00.168 18:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3132354 00:37:00.168 18:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3132354 ']' 00:37:00.168 18:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3132354 00:37:00.168 18:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:00.168 18:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:00.168 18:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3132354 00:37:00.427 18:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:00.427 18:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:00.427 18:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3132354' 00:37:00.427 killing process with pid 3132354 00:37:00.427 18:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3132354 00:37:00.427 Received shutdown signal, test time was about 2.000000 seconds 00:37:00.427 00:37:00.427 Latency(us) 00:37:00.427 [2024-11-18T17:43:58.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:00.427 [2024-11-18T17:43:58.764Z] =================================================================================================================== 00:37:00.427 [2024-11-18T17:43:58.764Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:00.427 18:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3132354 00:37:01.361 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:37:01.361 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:01.361 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:01.361 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:01.361 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:01.361 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3133013 00:37:01.361 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:37:01.361 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3133013 /var/tmp/bperf.sock 00:37:01.361 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3133013 ']' 00:37:01.361 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:01.361 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:01.361 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:01.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:01.361 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:01.361 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:01.361 [2024-11-18 18:43:59.519614] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:37:01.361 [2024-11-18 18:43:59.519765] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3133013 ] 00:37:01.361 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:01.361 Zero copy mechanism will not be used. 
00:37:01.361 [2024-11-18 18:43:59.664124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:01.619 [2024-11-18 18:43:59.804031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:02.553 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:02.553 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:02.553 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:02.553 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:02.553 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:02.553 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.553 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:02.553 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.553 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:02.553 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:03.120 nvme0n1 00:37:03.120 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:03.120 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.120 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:03.120 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.120 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:03.120 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:03.379 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:03.379 Zero copy mechanism will not be used. 00:37:03.379 Running I/O for 2 seconds... 00:37:03.379 [2024-11-18 18:44:01.465287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.465495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.465577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.379 [2024-11-18 18:44:01.473767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.473946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.474017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.379 
[2024-11-18 18:44:01.481323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.481450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.481509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.379 [2024-11-18 18:44:01.488800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.488980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.489028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.379 [2024-11-18 18:44:01.496219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.496357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.496415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.379 [2024-11-18 18:44:01.504207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.504451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.504504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.379 [2024-11-18 18:44:01.512523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.512758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.512800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.379 [2024-11-18 18:44:01.519853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.520071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.520123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.379 [2024-11-18 18:44:01.527165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.527337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.527397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.379 [2024-11-18 18:44:01.534479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.534739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.534781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.379 [2024-11-18 18:44:01.541662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.541899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.541961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.379 [2024-11-18 18:44:01.548803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.548972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.549031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.379 [2024-11-18 18:44:01.556050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.556229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.556274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.379 [2024-11-18 18:44:01.564339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.564564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.564624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.379 [2024-11-18 18:44:01.572780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.572989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.573064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.379 [2024-11-18 18:44:01.579840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.580010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.580069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.379 [2024-11-18 18:44:01.587118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.587273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.587327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.379 [2024-11-18 18:44:01.595152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.595268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.595311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.379 [2024-11-18 18:44:01.603155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.603453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.603516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.379 [2024-11-18 18:44:01.611766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.611990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.612030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.379 [2024-11-18 18:44:01.620297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 18:44:01.620567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.379 [2024-11-18 18:44:01.620630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.379 [2024-11-18 18:44:01.629026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.379 [2024-11-18 
18:44:01.629264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.380 [2024-11-18 18:44:01.629321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.380 [2024-11-18 18:44:01.636376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.380 [2024-11-18 18:44:01.636518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.380 [2024-11-18 18:44:01.636574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.380 [2024-11-18 18:44:01.643700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.380 [2024-11-18 18:44:01.643862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.380 [2024-11-18 18:44:01.643937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.380 [2024-11-18 18:44:01.651161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.380 [2024-11-18 18:44:01.651318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.380 [2024-11-18 18:44:01.651370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.380 [2024-11-18 18:44:01.658736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.380 [2024-11-18 18:44:01.658852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.380 [2024-11-18 18:44:01.658917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.380 [2024-11-18 18:44:01.666901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.380 [2024-11-18 18:44:01.667151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.380 [2024-11-18 18:44:01.667199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.380 [2024-11-18 18:44:01.674759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.380 [2024-11-18 18:44:01.674921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.380 [2024-11-18 18:44:01.674966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.380 [2024-11-18 18:44:01.682137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.380 [2024-11-18 18:44:01.682304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.380 [2024-11-18 18:44:01.682352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.380 [2024-11-18 
18:44:01.689350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.380 [2024-11-18 18:44:01.689588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.380 [2024-11-18 18:44:01.689665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.380 [2024-11-18 18:44:01.696418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.380 [2024-11-18 18:44:01.696680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.380 [2024-11-18 18:44:01.696724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.380 [2024-11-18 18:44:01.703517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.380 [2024-11-18 18:44:01.703725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.380 [2024-11-18 18:44:01.703775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.380 [2024-11-18 18:44:01.710635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.380 [2024-11-18 18:44:01.710849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.380 [2024-11-18 18:44:01.710901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.638 [2024-11-18 18:44:01.717780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.638 [2024-11-18 18:44:01.717971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.638 [2024-11-18 18:44:01.718013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.638 [2024-11-18 18:44:01.725134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.638 [2024-11-18 18:44:01.725367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.638 [2024-11-18 18:44:01.725415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.638 [2024-11-18 18:44:01.732300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.638 [2024-11-18 18:44:01.732554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.638 [2024-11-18 18:44:01.732603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.638 [2024-11-18 18:44:01.739656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.739881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.739947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.746812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.747049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.747101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.753831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.754101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.754158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.761107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.761322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.761371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.768217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.768484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.768545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.775374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.775536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.775588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.782484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.782738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.782782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.789564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.789805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.789849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.797049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.797258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.797308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.804199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.804373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.804430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.811309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.811573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.811634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.818329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.818579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.818636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.825372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 
18:44:01.825625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.825676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.832339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.832592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.832656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.839521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.839766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.839817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.847052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.847329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.847381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.855364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.855489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.855542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.862933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.863182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.863232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.870723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.870946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.870993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.877855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.878001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.878056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 
18:44:01.884726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.884963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.885014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.891651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.891897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.891945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.898619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.898878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.898929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.905569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.905841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.905891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.912578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.912820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.912868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.919619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.639 [2024-11-18 18:44:01.919846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.639 [2024-11-18 18:44:01.919896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.639 [2024-11-18 18:44:01.926597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.640 [2024-11-18 18:44:01.926853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.640 [2024-11-18 18:44:01.926900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.640 [2024-11-18 18:44:01.933548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.640 [2024-11-18 18:44:01.933770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.640 [2024-11-18 18:44:01.933827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.640 [2024-11-18 18:44:01.940524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.640 [2024-11-18 18:44:01.940776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.640 [2024-11-18 18:44:01.940825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.640 [2024-11-18 18:44:01.947464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.640 [2024-11-18 18:44:01.947718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.640 [2024-11-18 18:44:01.947767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.640 [2024-11-18 18:44:01.954536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.640 [2024-11-18 18:44:01.954807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.640 [2024-11-18 18:44:01.954856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.640 [2024-11-18 18:44:01.961665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.640 [2024-11-18 18:44:01.961891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:03.640 [2024-11-18 18:44:01.961942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.640 [2024-11-18 18:44:01.968588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.640 [2024-11-18 18:44:01.968838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.640 [2024-11-18 18:44:01.968883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.899 [2024-11-18 18:44:01.975794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.899 [2024-11-18 18:44:01.976048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.899 [2024-11-18 18:44:01.976099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.899 [2024-11-18 18:44:01.982803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.899 [2024-11-18 18:44:01.983072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.899 [2024-11-18 18:44:01.983125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.899 [2024-11-18 18:44:01.989836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.899 [2024-11-18 18:44:01.990075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.899 [2024-11-18 18:44:01.990125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.899 [2024-11-18 18:44:01.997078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.899 [2024-11-18 18:44:01.997331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.899 [2024-11-18 18:44:01.997379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.899 [2024-11-18 18:44:02.004138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.899 [2024-11-18 18:44:02.004297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.899 [2024-11-18 18:44:02.004347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.899 [2024-11-18 18:44:02.011118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.899 [2024-11-18 18:44:02.011297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.899 [2024-11-18 18:44:02.011354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.899 [2024-11-18 18:44:02.018098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.899 [2024-11-18 
18:44:02.018294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.899 [2024-11-18 18:44:02.018362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.899 [2024-11-18 18:44:02.025117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.899 [2024-11-18 18:44:02.025319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.899 [2024-11-18 18:44:02.025382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.899 [2024-11-18 18:44:02.032162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.899 [2024-11-18 18:44:02.032418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.899 [2024-11-18 18:44:02.032471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.899 [2024-11-18 18:44:02.039293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.899 [2024-11-18 18:44:02.039551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.899 [2024-11-18 18:44:02.039633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.899 [2024-11-18 18:44:02.046507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.899 [2024-11-18 18:44:02.046744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.899 [2024-11-18 18:44:02.046795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.899 [2024-11-18 18:44:02.053779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.899 [2024-11-18 18:44:02.054014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.899 [2024-11-18 18:44:02.054060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.899 [2024-11-18 18:44:02.061065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.899 [2024-11-18 18:44:02.061266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.899 [2024-11-18 18:44:02.061309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.899 [2024-11-18 18:44:02.068284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.899 [2024-11-18 18:44:02.068553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.899 [2024-11-18 18:44:02.068603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.899 [2024-11-18 
18:44:02.076070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.899 [2024-11-18 18:44:02.076291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.899 [2024-11-18 18:44:02.076335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.899 [2024-11-18 18:44:02.084202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.899 [2024-11-18 18:44:02.084382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.899 [2024-11-18 18:44:02.084435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.899 [2024-11-18 18:44:02.092047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.899 [2024-11-18 18:44:02.092300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.899 [2024-11-18 18:44:02.092353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.899 [2024-11-18 18:44:02.099235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.899 [2024-11-18 18:44:02.099473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.899 [2024-11-18 18:44:02.099519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.899 [2024-11-18 18:44:02.106143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.899 [2024-11-18 18:44:02.106344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.899 [2024-11-18 18:44:02.106403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.899 [2024-11-18 18:44:02.113157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.899 [2024-11-18 18:44:02.113411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.899 [2024-11-18 18:44:02.113457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.900 [2024-11-18 18:44:02.121199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.900 [2024-11-18 18:44:02.121431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.900 [2024-11-18 18:44:02.121479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.900 [2024-11-18 18:44:02.129309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.900 [2024-11-18 18:44:02.129505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.900 [2024-11-18 18:44:02.129556] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.900 [2024-11-18 18:44:02.136222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.900 [2024-11-18 18:44:02.136346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.900 [2024-11-18 18:44:02.136395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.900 [2024-11-18 18:44:02.143303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.900 [2024-11-18 18:44:02.143475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.900 [2024-11-18 18:44:02.143541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.900 [2024-11-18 18:44:02.150855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.900 [2024-11-18 18:44:02.150970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.900 [2024-11-18 18:44:02.151023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.900 [2024-11-18 18:44:02.158388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.900 [2024-11-18 18:44:02.158661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:03.900 [2024-11-18 18:44:02.158721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.900 [2024-11-18 18:44:02.165932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.900 [2024-11-18 18:44:02.166203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.900 [2024-11-18 18:44:02.166253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.900 [2024-11-18 18:44:02.172984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.900 [2024-11-18 18:44:02.173258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.900 [2024-11-18 18:44:02.173307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.900 [2024-11-18 18:44:02.180064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.900 [2024-11-18 18:44:02.180213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.900 [2024-11-18 18:44:02.180270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.900 [2024-11-18 18:44:02.187857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.900 [2024-11-18 18:44:02.188087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.900 [2024-11-18 18:44:02.188135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.900 [2024-11-18 18:44:02.195825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.900 [2024-11-18 18:44:02.196067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.900 [2024-11-18 18:44:02.196114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.900 [2024-11-18 18:44:02.203597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.900 [2024-11-18 18:44:02.203840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.900 [2024-11-18 18:44:02.203889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:03.900 [2024-11-18 18:44:02.210545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.900 [2024-11-18 18:44:02.210789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.900 [2024-11-18 18:44:02.210842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.900 [2024-11-18 18:44:02.217601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.900 [2024-11-18 
18:44:02.217763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.900 [2024-11-18 18:44:02.217821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.900 [2024-11-18 18:44:02.224678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.900 [2024-11-18 18:44:02.224850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.900 [2024-11-18 18:44:02.224908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.900 [2024-11-18 18:44:02.231691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:03.900 [2024-11-18 18:44:02.231876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.900 [2024-11-18 18:44:02.231935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.159 [2024-11-18 18:44:02.238833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.159 [2024-11-18 18:44:02.239007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.159 [2024-11-18 18:44:02.239063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.159 [2024-11-18 18:44:02.245839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.159 [2024-11-18 18:44:02.246059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.159 [2024-11-18 18:44:02.246113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.159 [2024-11-18 18:44:02.252766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.159 [2024-11-18 18:44:02.253003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.159 [2024-11-18 18:44:02.253050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.159 [2024-11-18 18:44:02.259879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.159 [2024-11-18 18:44:02.260137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.159 [2024-11-18 18:44:02.260201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.266992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.267187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.267245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 
18:44:02.274226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.274483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.274533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.281311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.281562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.281618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.288299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.288483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.288544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.295210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.295456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.295505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.302285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.302538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.302589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.309332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.309614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.309667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.316284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.316451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.316496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.323443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.323722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.323778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.330504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.330764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.330834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.337549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.337789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.337835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.344642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.344886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.344936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.351695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.351950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.351999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.358922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.359177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.359224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.367219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.367358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.367415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.374260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.374448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.374501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.381307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.381469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.381525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.388332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.388455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.388509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.396507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.396766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.396814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.404616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.404860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.404907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.411620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 
18:44:02.411819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.411880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.418596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.418767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.418816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.425700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.425887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.425938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.433647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.433790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.433847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.440700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.440888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.440939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.447619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.160 [2024-11-18 18:44:02.447787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.160 [2024-11-18 18:44:02.447845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.160 [2024-11-18 18:44:02.454662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.161 [2024-11-18 18:44:02.454923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.161 [2024-11-18 18:44:02.454980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.161 4209.00 IOPS, 526.12 MiB/s [2024-11-18T17:44:02.498Z] [2024-11-18 18:44:02.463182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.161 [2024-11-18 18:44:02.463431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.161 [2024-11-18 18:44:02.463486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:37:04.161 [2024-11-18 18:44:02.470232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.161 [2024-11-18 18:44:02.470473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.161 [2024-11-18 18:44:02.470522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.161 [2024-11-18 18:44:02.477203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.161 [2024-11-18 18:44:02.477443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.161 [2024-11-18 18:44:02.477491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.161 [2024-11-18 18:44:02.484319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.161 [2024-11-18 18:44:02.484568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.161 [2024-11-18 18:44:02.484632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.161 [2024-11-18 18:44:02.491324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.161 [2024-11-18 18:44:02.491561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.161 [2024-11-18 18:44:02.491619] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.420 [2024-11-18 18:44:02.499197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.420 [2024-11-18 18:44:02.499352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.420 [2024-11-18 18:44:02.499395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.420 [2024-11-18 18:44:02.507428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.420 [2024-11-18 18:44:02.507627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.420 [2024-11-18 18:44:02.507700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.420 [2024-11-18 18:44:02.514372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.420 [2024-11-18 18:44:02.514556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.420 [2024-11-18 18:44:02.514618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.420 [2024-11-18 18:44:02.521308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.420 [2024-11-18 18:44:02.521463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.420 
[2024-11-18 18:44:02.521522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.420 [2024-11-18 18:44:02.528794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.420 [2024-11-18 18:44:02.528923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.420 [2024-11-18 18:44:02.528973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.420 [2024-11-18 18:44:02.536312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.420 [2024-11-18 18:44:02.536495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.420 [2024-11-18 18:44:02.536548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.420 [2024-11-18 18:44:02.543910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.420 [2024-11-18 18:44:02.544149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.420 [2024-11-18 18:44:02.544196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.420 [2024-11-18 18:44:02.550998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.420 [2024-11-18 18:44:02.551275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.420 [2024-11-18 18:44:02.551325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.420 [2024-11-18 18:44:02.558188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.420 [2024-11-18 18:44:02.558365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.420 [2024-11-18 18:44:02.558425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.420 [2024-11-18 18:44:02.565331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.420 [2024-11-18 18:44:02.565591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.420 [2024-11-18 18:44:02.565663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.420 [2024-11-18 18:44:02.572648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.420 [2024-11-18 18:44:02.572877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.420 [2024-11-18 18:44:02.572925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.420 [2024-11-18 18:44:02.580009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.420 [2024-11-18 18:44:02.580261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.420 [2024-11-18 18:44:02.580326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.420 [2024-11-18 18:44:02.587253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.420 [2024-11-18 18:44:02.587502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.420 [2024-11-18 18:44:02.587551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.594305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.594555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.594602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.601561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.601750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.601806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.608806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.609050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.609097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.616101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.616339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.616386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.623201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.623412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.623471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.630399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.630669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.630720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.637533] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.637768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.637817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.644792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.645050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.645104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.652076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.652300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.652346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.659940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.660214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.660264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.667972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.668188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.668236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.676868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.677003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.677074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.685602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.685820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.685865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.694765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.695000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.695052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.703852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.704101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.704151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.713129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.713331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.713377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.722124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.722357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.722404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.731250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.731534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.731585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.739459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.739655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.739710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.746579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.746727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.746784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.421 [2024-11-18 18:44:02.753680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.421 [2024-11-18 18:44:02.753857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.421 [2024-11-18 18:44:02.753915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.760757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.760937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.760993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.767941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.768179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.768234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.775105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.775356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.775404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.782475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.782745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.782820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.789848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 
18:44:02.790015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.790074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.796976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.797211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.797263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.804095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.804342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.804393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.811265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.811427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.811484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.818435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.818711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.818765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.825486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.825645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.825692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.832672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.832903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.832956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.839727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.839997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.840050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 
18:44:02.846728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.846990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.847044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.853864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.854060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.854111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.861005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.861254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.861303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.868269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.868511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.868560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.875378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.875648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.875698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.882509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.882757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.882804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.889593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.889851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.889898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.896796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.897046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.897099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.904084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.904319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.904390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.911298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.911489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.911550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.918485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.918733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.918782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.925592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.681 [2024-11-18 18:44:02.925820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:04.681 [2024-11-18 18:44:02.925867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.681 [2024-11-18 18:44:02.932671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.682 [2024-11-18 18:44:02.932905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.682 [2024-11-18 18:44:02.932954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.682 [2024-11-18 18:44:02.939689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.682 [2024-11-18 18:44:02.939941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.682 [2024-11-18 18:44:02.939989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.682 [2024-11-18 18:44:02.946938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.682 [2024-11-18 18:44:02.947180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.682 [2024-11-18 18:44:02.947229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.682 [2024-11-18 18:44:02.954042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.682 [2024-11-18 18:44:02.954313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.682 [2024-11-18 18:44:02.954365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.682 [2024-11-18 18:44:02.961072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.682 [2024-11-18 18:44:02.961306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.682 [2024-11-18 18:44:02.961354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.682 [2024-11-18 18:44:02.968065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.682 [2024-11-18 18:44:02.968348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.682 [2024-11-18 18:44:02.968400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.682 [2024-11-18 18:44:02.975277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.682 [2024-11-18 18:44:02.975521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.682 [2024-11-18 18:44:02.975570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.682 [2024-11-18 18:44:02.982551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.682 [2024-11-18 
18:44:02.982764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.682 [2024-11-18 18:44:02.982828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.682 [2024-11-18 18:44:02.989647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.682 [2024-11-18 18:44:02.989867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.682 [2024-11-18 18:44:02.989913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.682 [2024-11-18 18:44:02.996669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.682 [2024-11-18 18:44:02.996922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.682 [2024-11-18 18:44:02.996973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.682 [2024-11-18 18:44:03.003712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.682 [2024-11-18 18:44:03.003962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.682 [2024-11-18 18:44:03.004013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.682 [2024-11-18 18:44:03.010933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.682 [2024-11-18 18:44:03.011178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.682 [2024-11-18 18:44:03.011225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.941 [2024-11-18 18:44:03.018094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.941 [2024-11-18 18:44:03.018334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.941 [2024-11-18 18:44:03.018385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.941 [2024-11-18 18:44:03.025272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.941 [2024-11-18 18:44:03.025519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.941 [2024-11-18 18:44:03.025585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.941 [2024-11-18 18:44:03.032316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.941 [2024-11-18 18:44:03.032561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.941 [2024-11-18 18:44:03.032629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.941 [2024-11-18 
18:44:03.039507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.941 [2024-11-18 18:44:03.039694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.941 [2024-11-18 18:44:03.039753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.941 [2024-11-18 18:44:03.046712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.941 [2024-11-18 18:44:03.046984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.941 [2024-11-18 18:44:03.047038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.941 [2024-11-18 18:44:03.053841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.941 [2024-11-18 18:44:03.054053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.941 [2024-11-18 18:44:03.054103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.941 [2024-11-18 18:44:03.061165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.941 [2024-11-18 18:44:03.061399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.941 [2024-11-18 18:44:03.061447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.941 [2024-11-18 18:44:03.068395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.941 [2024-11-18 18:44:03.068657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.941 [2024-11-18 18:44:03.068713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.075690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.075901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.075947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.083375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.083533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.083576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.092002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.092226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.092285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.100866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.101075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.101118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.108539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.108726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.108781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.115630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.115818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.115863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.122704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.122899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.122957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.129829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.129966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.130026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.137694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.137825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.137881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.144978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.145116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.145174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.152108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.152262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.152314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.159229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.159362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.159422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.167005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.167144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.167201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.174122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.174296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.174347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.181290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 
18:44:03.181416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.181472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.188819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.188963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.189016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.196054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.196234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.196280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.204035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.204150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.204209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.211876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.212030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.212083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.219146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.219311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.219371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.226297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.226464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.226513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.233513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.233674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.233739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 
18:44:03.241047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.241172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.241234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.248452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.248598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.248677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.255571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.255748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.255799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.262668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.262790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.262848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:04.942 [2024-11-18 18:44:03.270559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:04.942 [2024-11-18 18:44:03.270739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.942 [2024-11-18 18:44:03.270787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.201 [2024-11-18 18:44:03.279217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.279371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.279418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.287518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.287746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.287792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.296158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.296406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.296453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.303694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.303874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.303938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.311771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.311978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.312024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.318986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.319149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.319200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.326206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.326386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.326437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.333326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.333590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.333652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.340695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.340914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.340963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.347951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.348216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.348285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.355100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.355345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.355397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.362329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.362561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.362623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.369523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.369783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.369836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.376589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.376861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.376913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.383774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 
18:44:03.384011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.384059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.390842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.391090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.391138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.397970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.398226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.398276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.405006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.405281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.405328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.412501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.412731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.412776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.420873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.421149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.421201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.428477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.428714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.428777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.435552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.435731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.435787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 
18:44:03.442739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.442861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.442919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.450388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.450569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.450635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:05.202 [2024-11-18 18:44:03.458282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.458541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.458626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:05.202 4193.00 IOPS, 524.12 MiB/s [2024-11-18T17:44:03.539Z] [2024-11-18 18:44:03.466245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.202 [2024-11-18 18:44:03.466451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.202 [2024-11-18 18:44:03.466495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:05.202 00:37:05.202 Latency(us) 00:37:05.202 [2024-11-18T17:44:03.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:05.203 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:05.203 nvme0n1 : 2.00 4190.57 523.82 0.00 0.00 3804.67 3058.35 12718.84 00:37:05.203 [2024-11-18T17:44:03.540Z] =================================================================================================================== 00:37:05.203 [2024-11-18T17:44:03.540Z] Total : 4190.57 523.82 0.00 0.00 3804.67 3058.35 12718.84 00:37:05.203 { 00:37:05.203 "results": [ 00:37:05.203 { 00:37:05.203 "job": "nvme0n1", 00:37:05.203 "core_mask": "0x2", 00:37:05.203 "workload": "randwrite", 00:37:05.203 "status": "finished", 00:37:05.203 "queue_depth": 16, 00:37:05.203 "io_size": 131072, 00:37:05.203 "runtime": 2.004979, 00:37:05.203 "iops": 4190.5675820046, 00:37:05.203 "mibps": 523.820947750575, 00:37:05.203 "io_failed": 0, 00:37:05.203 "io_timeout": 0, 00:37:05.203 "avg_latency_us": 3804.671832985092, 00:37:05.203 "min_latency_us": 3058.346666666667, 00:37:05.203 "max_latency_us": 12718.838518518518 00:37:05.203 } 00:37:05.203 ], 00:37:05.203 "core_count": 1 00:37:05.203 } 00:37:05.203 18:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:05.203 18:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:05.203 18:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:05.203 18:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:05.203 | .driver_specific 00:37:05.203 | .nvme_error 
00:37:05.203 | .status_code 00:37:05.203 | .command_transient_transport_error' 00:37:05.461 18:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 272 > 0 )) 00:37:05.461 18:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3133013 00:37:05.461 18:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3133013 ']' 00:37:05.461 18:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3133013 00:37:05.461 18:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:05.461 18:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:05.461 18:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3133013 00:37:05.719 18:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:05.719 18:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:05.719 18:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3133013' 00:37:05.719 killing process with pid 3133013 00:37:05.719 18:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3133013 00:37:05.719 Received shutdown signal, test time was about 2.000000 seconds 00:37:05.719 00:37:05.719 Latency(us) 00:37:05.719 [2024-11-18T17:44:04.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:05.719 [2024-11-18T17:44:04.056Z] =================================================================================================================== 00:37:05.719 [2024-11-18T17:44:04.056Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:37:05.719 18:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3133013 00:37:06.653 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3130988 00:37:06.653 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3130988 ']' 00:37:06.653 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3130988 00:37:06.653 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:06.653 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:06.653 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3130988 00:37:06.653 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:06.653 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:06.653 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3130988' 00:37:06.653 killing process with pid 3130988 00:37:06.653 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3130988 00:37:06.653 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3130988 00:37:08.029 00:37:08.029 real 0m23.482s 00:37:08.029 user 0m46.018s 00:37:08.029 sys 0m4.666s 00:37:08.029 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:08.029 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:08.029 ************************************ 00:37:08.029 END TEST 
nvmf_digest_error 00:37:08.029 ************************************ 00:37:08.029 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:37:08.029 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:37:08.029 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:08.029 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:37:08.029 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:08.029 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:37:08.029 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:08.029 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:08.029 rmmod nvme_tcp 00:37:08.029 rmmod nvme_fabrics 00:37:08.029 rmmod nvme_keyring 00:37:08.029 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:08.029 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:37:08.029 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:37:08.029 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3130988 ']' 00:37:08.029 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3130988 00:37:08.029 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3130988 ']' 00:37:08.029 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3130988 00:37:08.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3130988) - No such process 00:37:08.029 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3130988 is not found' 00:37:08.029 Process with pid 3130988 is not found 00:37:08.029 18:44:06 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:08.029 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:08.029 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:08.029 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:37:08.029 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:37:08.029 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:08.029 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:37:08.029 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:08.029 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:08.029 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:08.029 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:08.029 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:09.932 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:09.932 00:37:09.932 real 0m52.934s 00:37:09.932 user 1m35.077s 00:37:09.932 sys 0m11.258s 00:37:09.932 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:09.932 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:09.932 ************************************ 00:37:09.932 END TEST nvmf_digest 00:37:09.932 ************************************ 00:37:09.932 18:44:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:37:09.932 18:44:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:37:09.932 18:44:08 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:37:09.932 18:44:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:09.932 18:44:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:09.932 18:44:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:09.932 18:44:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.932 ************************************ 00:37:09.932 START TEST nvmf_bdevperf 00:37:09.932 ************************************ 00:37:09.932 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:09.932 * Looking for test storage... 00:37:09.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:09.932 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:09.932 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:37:09.932 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:10.192 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:10.192 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:10.192 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:10.192 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:10.192 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:37:10.192 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:37:10.192 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # 
IFS=.-: 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 
00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:10.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.193 --rc genhtml_branch_coverage=1 00:37:10.193 --rc genhtml_function_coverage=1 00:37:10.193 --rc genhtml_legend=1 00:37:10.193 --rc geninfo_all_blocks=1 00:37:10.193 --rc geninfo_unexecuted_blocks=1 00:37:10.193 00:37:10.193 ' 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:10.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.193 --rc genhtml_branch_coverage=1 00:37:10.193 --rc genhtml_function_coverage=1 00:37:10.193 --rc genhtml_legend=1 00:37:10.193 --rc geninfo_all_blocks=1 00:37:10.193 --rc geninfo_unexecuted_blocks=1 00:37:10.193 00:37:10.193 ' 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:10.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.193 --rc genhtml_branch_coverage=1 00:37:10.193 --rc genhtml_function_coverage=1 00:37:10.193 --rc genhtml_legend=1 00:37:10.193 --rc geninfo_all_blocks=1 00:37:10.193 --rc geninfo_unexecuted_blocks=1 00:37:10.193 00:37:10.193 ' 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:10.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.193 --rc genhtml_branch_coverage=1 00:37:10.193 --rc genhtml_function_coverage=1 00:37:10.193 --rc genhtml_legend=1 00:37:10.193 --rc geninfo_all_blocks=1 00:37:10.193 --rc 
geninfo_unexecuted_blocks=1 00:37:10.193 00:37:10.193 ' 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:10.193 
18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:10.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- host/bdevperf.sh@24 -- # nvmftestinit 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:37:10.193 18:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:12.154 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:12.154 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:12.155 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:12.155 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:12.155 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:12.155 18:44:10 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:12.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:12.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:37:12.155 00:37:12.155 --- 10.0.0.2 ping statistics --- 00:37:12.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:12.155 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:37:12.155 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:12.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:12.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:37:12.413 00:37:12.413 --- 10.0.0.1 ping statistics --- 00:37:12.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:12.413 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:12.413 18:44:10 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3135642 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3135642 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3135642 ']' 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:12.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:12.413 18:44:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:12.413 [2024-11-18 18:44:10.610226] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:37:12.413 [2024-11-18 18:44:10.610371] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:12.671 [2024-11-18 18:44:10.761991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:12.671 [2024-11-18 18:44:10.894996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:12.672 [2024-11-18 18:44:10.895087] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:12.672 [2024-11-18 18:44:10.895113] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:12.672 [2024-11-18 18:44:10.895139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:12.672 [2024-11-18 18:44:10.895160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:12.672 [2024-11-18 18:44:10.897790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:12.672 [2024-11-18 18:44:10.897844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:12.672 [2024-11-18 18:44:10.897850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:13.605 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:13.605 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:13.605 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:13.605 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:13.605 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:13.605 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:13.605 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:13.605 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.605 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:13.605 [2024-11-18 18:44:11.615709] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:13.605 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.605 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:13.606 Malloc0 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:13.606 [2024-11-18 18:44:11.736313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:13.606 
18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:13.606 { 00:37:13.606 "params": { 00:37:13.606 "name": "Nvme$subsystem", 00:37:13.606 "trtype": "$TEST_TRANSPORT", 00:37:13.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:13.606 "adrfam": "ipv4", 00:37:13.606 "trsvcid": "$NVMF_PORT", 00:37:13.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:13.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:13.606 "hdgst": ${hdgst:-false}, 00:37:13.606 "ddgst": ${ddgst:-false} 00:37:13.606 }, 00:37:13.606 "method": "bdev_nvme_attach_controller" 00:37:13.606 } 00:37:13.606 EOF 00:37:13.606 )") 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:13.606 18:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:13.606 "params": { 00:37:13.606 "name": "Nvme1", 00:37:13.606 "trtype": "tcp", 00:37:13.606 "traddr": "10.0.0.2", 00:37:13.606 "adrfam": "ipv4", 00:37:13.606 "trsvcid": "4420", 00:37:13.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:13.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:13.606 "hdgst": false, 00:37:13.606 "ddgst": false 00:37:13.606 }, 00:37:13.606 "method": "bdev_nvme_attach_controller" 00:37:13.606 }' 00:37:13.606 [2024-11-18 18:44:11.829657] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:37:13.606 [2024-11-18 18:44:11.829791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3135797 ] 00:37:13.864 [2024-11-18 18:44:11.969043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:13.864 [2024-11-18 18:44:12.097420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:14.430 Running I/O for 1 seconds... 00:37:15.364 6168.00 IOPS, 24.09 MiB/s 00:37:15.364 Latency(us) 00:37:15.364 [2024-11-18T17:44:13.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.364 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:15.364 Verification LBA range: start 0x0 length 0x4000 00:37:15.365 Nvme1n1 : 1.02 6195.90 24.20 0.00 0.00 20546.77 4369.07 18544.26 00:37:15.365 [2024-11-18T17:44:13.702Z] =================================================================================================================== 00:37:15.365 [2024-11-18T17:44:13.702Z] Total : 6195.90 24.20 0.00 0.00 20546.77 4369.07 18544.26 00:37:16.299 18:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3136184 00:37:16.299 18:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:16.299 18:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:16.299 18:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:16.299 18:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:16.299 18:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:16.299 18:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:37:16.299 18:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:16.299 { 00:37:16.299 "params": { 00:37:16.299 "name": "Nvme$subsystem", 00:37:16.299 "trtype": "$TEST_TRANSPORT", 00:37:16.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:16.299 "adrfam": "ipv4", 00:37:16.299 "trsvcid": "$NVMF_PORT", 00:37:16.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:16.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:16.299 "hdgst": ${hdgst:-false}, 00:37:16.299 "ddgst": ${ddgst:-false} 00:37:16.299 }, 00:37:16.299 "method": "bdev_nvme_attach_controller" 00:37:16.299 } 00:37:16.299 EOF 00:37:16.299 )") 00:37:16.299 18:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:16.299 18:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:37:16.299 18:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:16.299 18:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:16.299 "params": { 00:37:16.299 "name": "Nvme1", 00:37:16.299 "trtype": "tcp", 00:37:16.299 "traddr": "10.0.0.2", 00:37:16.299 "adrfam": "ipv4", 00:37:16.299 "trsvcid": "4420", 00:37:16.299 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:16.299 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:16.299 "hdgst": false, 00:37:16.299 "ddgst": false 00:37:16.299 }, 00:37:16.299 "method": "bdev_nvme_attach_controller" 00:37:16.299 }' 00:37:16.299 [2024-11-18 18:44:14.600174] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:37:16.299 [2024-11-18 18:44:14.600298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3136184 ] 00:37:16.557 [2024-11-18 18:44:14.735102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.557 [2024-11-18 18:44:14.860108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:17.122 Running I/O for 15 seconds... 00:37:19.430 6284.00 IOPS, 24.55 MiB/s [2024-11-18T17:44:17.767Z] 6313.00 IOPS, 24.66 MiB/s [2024-11-18T17:44:17.767Z] 18:44:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3135642 00:37:19.430 18:44:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:19.430 [2024-11-18 18:44:17.546372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.430 [2024-11-18 18:44:17.546448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.430 [2024-11-18 18:44:17.546506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.430 [2024-11-18 18:44:17.546535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.430 [2024-11-18 18:44:17.546569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.430 [2024-11-18 18:44:17.546615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.430 [2024-11-18 18:44:17.546663] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.430 [2024-11-18 18:44:17.546699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.430 [2024-11-18 18:44:17.546727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.430 [2024-11-18 18:44:17.546754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.430 [2024-11-18 18:44:17.546780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.430 [2024-11-18 18:44:17.546805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.430 [2024-11-18 18:44:17.546831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.430 [2024-11-18 18:44:17.546854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.430 [2024-11-18 18:44:17.546879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.430 [2024-11-18 18:44:17.546927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.430 [2024-11-18 18:44:17.546956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.430 [2024-11-18 18:44:17.546983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:37:19.430 [2024-11-18 18:44:17.547011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.430 [2024-11-18 18:44:17.547054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.430 [2024-11-18 18:44:17.547086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.430 [2024-11-18 18:44:17.547113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.430 [2024-11-18 18:44:17.547141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.430 [2024-11-18 18:44:17.547167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.430 [2024-11-18 18:44:17.547194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.430 [2024-11-18 18:44:17.547220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.430 [2024-11-18 18:44:17.547248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.430 [2024-11-18 18:44:17.547274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.430 [2024-11-18 18:44:17.547302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 
18:44:17.547327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.547356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.547382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.547416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.547443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.547471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.547497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.547525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.547551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.547580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.547613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.547644] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.547684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.547713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.547736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.547761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.547784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.547808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.547831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.547856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.547880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.547920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.547942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.547984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.548010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.548038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.548064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.548091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.548117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.548150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.548177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.548205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.548230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.548257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:111832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 
18:44:17.548283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.548310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.548335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.548362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.548387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.548415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.548441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.548468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.548493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.548520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:111872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.548545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.548573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:99 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.548616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.548661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.548685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.548709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.548732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.548756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:111904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.548778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.548802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:111912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.548828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.548854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.548876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.548914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.548935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.548973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.548999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.549028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.549054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.549081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:111952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.549107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.549135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.431 [2024-11-18 18:44:17.549160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.549187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.431 [2024-11-18 18:44:17.549212] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.549239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.431 [2024-11-18 18:44:17.549264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.549292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.431 [2024-11-18 18:44:17.549317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.549344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:111048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.431 [2024-11-18 18:44:17.549370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.549397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.431 [2024-11-18 18:44:17.549422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.431 [2024-11-18 18:44:17.549449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:111064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.431 [2024-11-18 18:44:17.549474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.549508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:72 nsid:1 lba:111072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.549534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.549561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:111080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.549586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.549634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:111088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.549672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.549696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:111096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.549717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.549741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:111104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.549762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.549785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.549806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:37:19.432 [2024-11-18 18:44:17.549830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.549851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.549874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.549910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.549933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.549969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.550000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.432 [2024-11-18 18:44:17.550025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.550051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.550076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.550103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:111152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.550129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.550156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:111160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.550186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.550215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.550241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.550269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:111176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.550294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.550322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.550347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.550375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.550400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.550427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:111200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.550452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.550480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.550505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.550532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:111216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.550571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.550620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.550663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.550687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.550709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.550733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.550754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:37:19.432 [2024-11-18 18:44:17.550780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.550802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.550832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.550855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.550912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.550935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.550976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.551002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.551029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.551054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.551082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.551108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.551135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:111296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.551161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.551189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:111304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.551214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.551241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:111312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.551266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.551294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.551320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.551348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.432 [2024-11-18 18:44:17.551373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.551401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 
nsid:1 lba:111976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.432 [2024-11-18 18:44:17.551426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.551454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.432 [2024-11-18 18:44:17.551480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.551508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.432 [2024-11-18 18:44:17.551533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.432 [2024-11-18 18:44:17.551561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.432 [2024-11-18 18:44:17.551601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.551656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.433 [2024-11-18 18:44:17.551681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.551707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.433 [2024-11-18 18:44:17.551731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 
[2024-11-18 18:44:17.551758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.433 [2024-11-18 18:44:17.551782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.551807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:112032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:19.433 [2024-11-18 18:44:17.551830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.551855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:111336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.551878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.551919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.551941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.551984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:111352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.552010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.552038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:111360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.552065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.552093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.552120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.552148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.552174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.552203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.552229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.552257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.552282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.552310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:111400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.552344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.552374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 
nsid:1 lba:111408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.552401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.552429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.552455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.552483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.552509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.552537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.552563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.552601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.552636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.552678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:111448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.552700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:37:19.433 [2024-11-18 18:44:17.552724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.552746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.552770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:111464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.552791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.552815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.552837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.552861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:111480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.552909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.552939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:111488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.552965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.433 [2024-11-18 18:44:17.552992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:111496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.433 [2024-11-18 18:44:17.553018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:19.433 [2024-11-18 18:44:17.553051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:19.433 [2024-11-18 18:44:17.553078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:19.433 [2024-11-18 18:44:17.553105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:111512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:19.433 [2024-11-18 18:44:17.553131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:19.433 [2024-11-18 18:44:17.553159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:111520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:19.433 [2024-11-18 18:44:17.553185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:19.433 [2024-11-18 18:44:17.553213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:19.433 [2024-11-18 18:44:17.553239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:19.433 [2024-11-18 18:44:17.553266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:19.433 [2024-11-18 18:44:17.553291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:19.433 [2024-11-18 18:44:17.553319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:19.433 [2024-11-18 18:44:17.553344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:19.433 [2024-11-18 18:44:17.553371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:19.433 [2024-11-18 18:44:17.553397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:19.433 [2024-11-18 18:44:17.553424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:19.433 [2024-11-18 18:44:17.553449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:19.433 [2024-11-18 18:44:17.553477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:19.433 [2024-11-18 18:44:17.553503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:19.433 [2024-11-18 18:44:17.553531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:111576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:19.433 [2024-11-18 18:44:17.553557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:19.433 [2024-11-18 18:44:17.553582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set
00:37:19.433 [2024-11-18 18:44:17.553656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:37:19.433 [2024-11-18 18:44:17.553677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:37:19.433 [2024-11-18 18:44:17.553696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111584 len:8 PRP1 0x0 PRP2 0x0
00:37:19.433 [2024-11-18 18:44:17.553716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:19.433 [2024-11-18 18:44:17.554129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:37:19.433 [2024-11-18 18:44:17.554169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:19.433 [2024-11-18 18:44:17.554198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:37:19.433 [2024-11-18 18:44:17.554222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:19.434 [2024-11-18 18:44:17.554247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:37:19.434 [2024-11-18 18:44:17.554270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:19.434 [2024-11-18 18:44:17.554294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:37:19.434 [2024-11-18 18:44:17.554318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:19.434 [2024-11-18 18:44:17.554340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.434 [2024-11-18 18:44:17.558620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.434 [2024-11-18 18:44:17.558693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.434 [2024-11-18 18:44:17.559578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.434 [2024-11-18 18:44:17.559635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.434 [2024-11-18 18:44:17.559681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.434 [2024-11-18 18:44:17.559955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.434 [2024-11-18 18:44:17.560257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.434 [2024-11-18 18:44:17.560307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.434 [2024-11-18 18:44:17.560336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.434 [2024-11-18 18:44:17.560363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.434 [2024-11-18 18:44:17.573463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.434 [2024-11-18 18:44:17.574035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.434 [2024-11-18 18:44:17.574079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.434 [2024-11-18 18:44:17.574107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.434 [2024-11-18 18:44:17.574394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.434 [2024-11-18 18:44:17.574691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.434 [2024-11-18 18:44:17.574724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.434 [2024-11-18 18:44:17.574747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.434 [2024-11-18 18:44:17.574770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.434 [2024-11-18 18:44:17.588117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.434 [2024-11-18 18:44:17.588554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.434 [2024-11-18 18:44:17.588597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.434 [2024-11-18 18:44:17.588641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.434 [2024-11-18 18:44:17.588929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.434 [2024-11-18 18:44:17.589218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.434 [2024-11-18 18:44:17.589251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.434 [2024-11-18 18:44:17.589274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.434 [2024-11-18 18:44:17.589297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.434 [2024-11-18 18:44:17.602690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.434 [2024-11-18 18:44:17.603159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.434 [2024-11-18 18:44:17.603201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.434 [2024-11-18 18:44:17.603228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.434 [2024-11-18 18:44:17.603514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.434 [2024-11-18 18:44:17.603813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.434 [2024-11-18 18:44:17.603846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.434 [2024-11-18 18:44:17.603871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.434 [2024-11-18 18:44:17.603893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.434 [2024-11-18 18:44:17.617243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.434 [2024-11-18 18:44:17.617719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.434 [2024-11-18 18:44:17.617762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.434 [2024-11-18 18:44:17.617790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.434 [2024-11-18 18:44:17.618075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.434 [2024-11-18 18:44:17.618361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.434 [2024-11-18 18:44:17.618392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.434 [2024-11-18 18:44:17.618416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.434 [2024-11-18 18:44:17.618438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.434 [2024-11-18 18:44:17.631755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.434 [2024-11-18 18:44:17.632218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.434 [2024-11-18 18:44:17.632266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.434 [2024-11-18 18:44:17.632295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.434 [2024-11-18 18:44:17.632580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.434 [2024-11-18 18:44:17.632878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.434 [2024-11-18 18:44:17.632910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.434 [2024-11-18 18:44:17.632934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.434 [2024-11-18 18:44:17.632955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.434 [2024-11-18 18:44:17.646218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.434 [2024-11-18 18:44:17.646687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.434 [2024-11-18 18:44:17.646729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.434 [2024-11-18 18:44:17.646756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.434 [2024-11-18 18:44:17.647040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.434 [2024-11-18 18:44:17.647326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.434 [2024-11-18 18:44:17.647357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.434 [2024-11-18 18:44:17.647381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.434 [2024-11-18 18:44:17.647403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.434 [2024-11-18 18:44:17.660667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.434 [2024-11-18 18:44:17.661137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.434 [2024-11-18 18:44:17.661179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.434 [2024-11-18 18:44:17.661206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.434 [2024-11-18 18:44:17.661491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.434 [2024-11-18 18:44:17.661790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.434 [2024-11-18 18:44:17.661824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.434 [2024-11-18 18:44:17.661848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.434 [2024-11-18 18:44:17.661870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.434 [2024-11-18 18:44:17.675134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.434 [2024-11-18 18:44:17.675596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.434 [2024-11-18 18:44:17.675644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.434 [2024-11-18 18:44:17.675671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.434 [2024-11-18 18:44:17.675963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.434 [2024-11-18 18:44:17.676250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.434 [2024-11-18 18:44:17.676283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.434 [2024-11-18 18:44:17.676307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.435 [2024-11-18 18:44:17.676329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.435 [2024-11-18 18:44:17.689587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.435 [2024-11-18 18:44:17.690065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.435 [2024-11-18 18:44:17.690106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.435 [2024-11-18 18:44:17.690133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.435 [2024-11-18 18:44:17.690417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.435 [2024-11-18 18:44:17.690717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.435 [2024-11-18 18:44:17.690749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.435 [2024-11-18 18:44:17.690773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.435 [2024-11-18 18:44:17.690795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.435 [2024-11-18 18:44:17.704063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.435 [2024-11-18 18:44:17.704492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.435 [2024-11-18 18:44:17.704534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.435 [2024-11-18 18:44:17.704561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.435 [2024-11-18 18:44:17.704856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.435 [2024-11-18 18:44:17.705142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.435 [2024-11-18 18:44:17.705174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.435 [2024-11-18 18:44:17.705197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.435 [2024-11-18 18:44:17.705219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.435 [2024-11-18 18:44:17.718472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.435 [2024-11-18 18:44:17.718963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.435 [2024-11-18 18:44:17.719006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.435 [2024-11-18 18:44:17.719033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.435 [2024-11-18 18:44:17.719318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.435 [2024-11-18 18:44:17.719605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.435 [2024-11-18 18:44:17.719654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.435 [2024-11-18 18:44:17.719679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.435 [2024-11-18 18:44:17.719701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.435 [2024-11-18 18:44:17.732979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.435 [2024-11-18 18:44:17.733434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.435 [2024-11-18 18:44:17.733475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.435 [2024-11-18 18:44:17.733502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.435 [2024-11-18 18:44:17.733800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.435 [2024-11-18 18:44:17.734085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.435 [2024-11-18 18:44:17.734118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.435 [2024-11-18 18:44:17.734141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.435 [2024-11-18 18:44:17.734163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.435 [2024-11-18 18:44:17.747371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.435 [2024-11-18 18:44:17.747832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.435 [2024-11-18 18:44:17.747874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.435 [2024-11-18 18:44:17.747901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.435 [2024-11-18 18:44:17.748184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.435 [2024-11-18 18:44:17.748469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.435 [2024-11-18 18:44:17.748501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.435 [2024-11-18 18:44:17.748524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.435 [2024-11-18 18:44:17.748546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.435 [2024-11-18 18:44:17.761743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.435 [2024-11-18 18:44:17.762191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.435 [2024-11-18 18:44:17.762233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.435 [2024-11-18 18:44:17.762260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.435 [2024-11-18 18:44:17.762542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.435 [2024-11-18 18:44:17.762836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.435 [2024-11-18 18:44:17.762869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.435 [2024-11-18 18:44:17.762893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.435 [2024-11-18 18:44:17.762936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.694 [2024-11-18 18:44:17.776161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.694 [2024-11-18 18:44:17.776596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.695 [2024-11-18 18:44:17.776660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.695 [2024-11-18 18:44:17.776687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.695 [2024-11-18 18:44:17.776971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.695 [2024-11-18 18:44:17.777256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.695 [2024-11-18 18:44:17.777290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.695 [2024-11-18 18:44:17.777313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.695 [2024-11-18 18:44:17.777336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.695 [2024-11-18 18:44:17.790527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.695 [2024-11-18 18:44:17.790999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.695 [2024-11-18 18:44:17.791041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.695 [2024-11-18 18:44:17.791068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.695 [2024-11-18 18:44:17.791351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.695 [2024-11-18 18:44:17.791650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.695 [2024-11-18 18:44:17.791684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.695 [2024-11-18 18:44:17.791707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.695 [2024-11-18 18:44:17.791729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.695 [2024-11-18 18:44:17.804918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.695 [2024-11-18 18:44:17.805393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.695 [2024-11-18 18:44:17.805435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.695 [2024-11-18 18:44:17.805462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.695 [2024-11-18 18:44:17.805757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.695 [2024-11-18 18:44:17.806042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.695 [2024-11-18 18:44:17.806076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.695 [2024-11-18 18:44:17.806100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.695 [2024-11-18 18:44:17.806123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.695 [2024-11-18 18:44:17.819305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.695 [2024-11-18 18:44:17.819720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.695 [2024-11-18 18:44:17.819762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.695 [2024-11-18 18:44:17.819790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.695 [2024-11-18 18:44:17.820071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.695 [2024-11-18 18:44:17.820377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.695 [2024-11-18 18:44:17.820411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.695 [2024-11-18 18:44:17.820435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.695 [2024-11-18 18:44:17.820458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.695 [2024-11-18 18:44:17.833905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.695 [2024-11-18 18:44:17.834333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.695 [2024-11-18 18:44:17.834377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.695 [2024-11-18 18:44:17.834406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.695 [2024-11-18 18:44:17.834702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.695 [2024-11-18 18:44:17.834989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.695 [2024-11-18 18:44:17.835022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.695 [2024-11-18 18:44:17.835046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.695 [2024-11-18 18:44:17.835070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.695 [2024-11-18 18:44:17.848482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.695 [2024-11-18 18:44:17.849005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.695 [2024-11-18 18:44:17.849048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.695 [2024-11-18 18:44:17.849076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.695 [2024-11-18 18:44:17.849358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.695 [2024-11-18 18:44:17.849654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.695 [2024-11-18 18:44:17.849687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.695 [2024-11-18 18:44:17.849711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.695 [2024-11-18 18:44:17.849734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.695 [2024-11-18 18:44:17.862911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.695 [2024-11-18 18:44:17.863341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.695 [2024-11-18 18:44:17.863384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.695 [2024-11-18 18:44:17.863417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.695 [2024-11-18 18:44:17.863713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.695 [2024-11-18 18:44:17.863999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.695 [2024-11-18 18:44:17.864033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.695 [2024-11-18 18:44:17.864057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.695 [2024-11-18 18:44:17.864079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.695 [2024-11-18 18:44:17.877258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.695 [2024-11-18 18:44:17.877710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.695 [2024-11-18 18:44:17.877753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.695 [2024-11-18 18:44:17.877779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.695 [2024-11-18 18:44:17.878062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.695 [2024-11-18 18:44:17.878347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.695 [2024-11-18 18:44:17.878380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.695 [2024-11-18 18:44:17.878404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.695 [2024-11-18 18:44:17.878427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.695 [2024-11-18 18:44:17.891627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.695 [2024-11-18 18:44:17.892064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.695 [2024-11-18 18:44:17.892107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.695 [2024-11-18 18:44:17.892134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.695 [2024-11-18 18:44:17.892416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.695 [2024-11-18 18:44:17.892715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.695 [2024-11-18 18:44:17.892749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.695 [2024-11-18 18:44:17.892773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.695 [2024-11-18 18:44:17.892796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.695 [2024-11-18 18:44:17.906227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.695 [2024-11-18 18:44:17.906679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-11-18 18:44:17.906721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.695 [2024-11-18 18:44:17.906747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.695 [2024-11-18 18:44:17.907029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.695 [2024-11-18 18:44:17.907319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.695 [2024-11-18 18:44:17.907352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.696 [2024-11-18 18:44:17.907376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.696 [2024-11-18 18:44:17.907399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.696 [2024-11-18 18:44:17.920854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.696 [2024-11-18 18:44:17.921337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-11-18 18:44:17.921380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.696 [2024-11-18 18:44:17.921407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.696 [2024-11-18 18:44:17.921707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.696 [2024-11-18 18:44:17.921992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.696 [2024-11-18 18:44:17.922024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.696 [2024-11-18 18:44:17.922047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.696 [2024-11-18 18:44:17.922069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.696 [2024-11-18 18:44:17.935273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.696 [2024-11-18 18:44:17.935728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-11-18 18:44:17.935770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.696 [2024-11-18 18:44:17.935798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.696 [2024-11-18 18:44:17.936080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.696 [2024-11-18 18:44:17.936363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.696 [2024-11-18 18:44:17.936396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.696 [2024-11-18 18:44:17.936420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.696 [2024-11-18 18:44:17.936442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.696 [2024-11-18 18:44:17.949669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.696 [2024-11-18 18:44:17.950132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-11-18 18:44:17.950174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.696 [2024-11-18 18:44:17.950200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.696 [2024-11-18 18:44:17.950483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.696 [2024-11-18 18:44:17.950784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.696 [2024-11-18 18:44:17.950818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.696 [2024-11-18 18:44:17.950850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.696 [2024-11-18 18:44:17.950873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.696 [2024-11-18 18:44:17.964049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.696 [2024-11-18 18:44:17.964508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-11-18 18:44:17.964550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.696 [2024-11-18 18:44:17.964577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.696 [2024-11-18 18:44:17.964873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.696 [2024-11-18 18:44:17.965158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.696 [2024-11-18 18:44:17.965191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.696 [2024-11-18 18:44:17.965214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.696 [2024-11-18 18:44:17.965237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.696 [2024-11-18 18:44:17.978429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.696 [2024-11-18 18:44:17.978881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-11-18 18:44:17.978924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.696 [2024-11-18 18:44:17.978951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.696 [2024-11-18 18:44:17.979234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.696 [2024-11-18 18:44:17.979519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.696 [2024-11-18 18:44:17.979553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.696 [2024-11-18 18:44:17.979577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.696 [2024-11-18 18:44:17.979600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.696 [2024-11-18 18:44:17.992821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.696 [2024-11-18 18:44:17.993245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-11-18 18:44:17.993290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.696 [2024-11-18 18:44:17.993317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.696 [2024-11-18 18:44:17.993600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.696 [2024-11-18 18:44:17.993899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.696 [2024-11-18 18:44:17.993933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.696 [2024-11-18 18:44:17.993956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.696 [2024-11-18 18:44:17.993985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.696 [2024-11-18 18:44:18.007407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.696 [2024-11-18 18:44:18.007860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-11-18 18:44:18.007903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.696 [2024-11-18 18:44:18.007931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.696 [2024-11-18 18:44:18.008214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.696 [2024-11-18 18:44:18.008501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.696 [2024-11-18 18:44:18.008534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.696 [2024-11-18 18:44:18.008558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.696 [2024-11-18 18:44:18.008581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.696 [2024-11-18 18:44:18.021809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.696 [2024-11-18 18:44:18.022265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-11-18 18:44:18.022307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.696 [2024-11-18 18:44:18.022334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.696 [2024-11-18 18:44:18.022630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.696 [2024-11-18 18:44:18.022916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.696 [2024-11-18 18:44:18.022949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.696 [2024-11-18 18:44:18.022973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.696 [2024-11-18 18:44:18.022997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.956 [2024-11-18 18:44:18.036184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.956 [2024-11-18 18:44:18.036631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.956 [2024-11-18 18:44:18.036675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.956 [2024-11-18 18:44:18.036703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.956 [2024-11-18 18:44:18.036986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.956 [2024-11-18 18:44:18.037269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.956 [2024-11-18 18:44:18.037301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.956 [2024-11-18 18:44:18.037324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.956 [2024-11-18 18:44:18.037346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.956 [2024-11-18 18:44:18.050773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.956 [2024-11-18 18:44:18.051231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.956 [2024-11-18 18:44:18.051274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.956 [2024-11-18 18:44:18.051300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.956 [2024-11-18 18:44:18.051584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.956 [2024-11-18 18:44:18.051884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.956 [2024-11-18 18:44:18.051918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.956 [2024-11-18 18:44:18.051942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.956 [2024-11-18 18:44:18.051965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.956 [2024-11-18 18:44:18.065146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.956 [2024-11-18 18:44:18.065547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.956 [2024-11-18 18:44:18.065589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.956 [2024-11-18 18:44:18.065629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.956 [2024-11-18 18:44:18.065914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.956 [2024-11-18 18:44:18.066201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.956 [2024-11-18 18:44:18.066235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.956 [2024-11-18 18:44:18.066259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.956 [2024-11-18 18:44:18.066282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.956 [2024-11-18 18:44:18.079711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.956 [2024-11-18 18:44:18.080174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.956 [2024-11-18 18:44:18.080216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.956 [2024-11-18 18:44:18.080243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.956 [2024-11-18 18:44:18.080526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.956 [2024-11-18 18:44:18.080824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.956 [2024-11-18 18:44:18.080858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.956 [2024-11-18 18:44:18.080883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.956 [2024-11-18 18:44:18.080906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.956 [2024-11-18 18:44:18.094062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.956 [2024-11-18 18:44:18.094511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.956 [2024-11-18 18:44:18.094552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.956 [2024-11-18 18:44:18.094585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.956 [2024-11-18 18:44:18.094884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.956 [2024-11-18 18:44:18.095169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.956 [2024-11-18 18:44:18.095202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.956 [2024-11-18 18:44:18.095226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.956 [2024-11-18 18:44:18.095249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.956 [2024-11-18 18:44:18.108425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.956 [2024-11-18 18:44:18.108883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.956 [2024-11-18 18:44:18.108925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.956 [2024-11-18 18:44:18.108952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.956 [2024-11-18 18:44:18.109234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.956 [2024-11-18 18:44:18.109519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.956 [2024-11-18 18:44:18.109552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.956 [2024-11-18 18:44:18.109575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.956 [2024-11-18 18:44:18.109598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.956 [2024-11-18 18:44:18.122836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.956 [2024-11-18 18:44:18.123300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.956 [2024-11-18 18:44:18.123342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.956 [2024-11-18 18:44:18.123369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.956 [2024-11-18 18:44:18.123668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.956 [2024-11-18 18:44:18.123953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.956 [2024-11-18 18:44:18.123986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.956 [2024-11-18 18:44:18.124010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.956 [2024-11-18 18:44:18.124032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.956 [2024-11-18 18:44:18.137233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.956 [2024-11-18 18:44:18.137758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.956 [2024-11-18 18:44:18.137801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.956 [2024-11-18 18:44:18.137828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.956 [2024-11-18 18:44:18.138111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.956 [2024-11-18 18:44:18.138402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.956 [2024-11-18 18:44:18.138436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.956 [2024-11-18 18:44:18.138460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.956 [2024-11-18 18:44:18.138483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.956 [2024-11-18 18:44:18.151685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.956 [2024-11-18 18:44:18.152251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.956 [2024-11-18 18:44:18.152311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.956 [2024-11-18 18:44:18.152338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.956 [2024-11-18 18:44:18.152636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.956 [2024-11-18 18:44:18.152920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.956 [2024-11-18 18:44:18.152953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.956 [2024-11-18 18:44:18.152977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.956 [2024-11-18 18:44:18.153000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.957 [2024-11-18 18:44:18.166216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.957 [2024-11-18 18:44:18.166663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.957 [2024-11-18 18:44:18.166706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.957 [2024-11-18 18:44:18.166734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.957 [2024-11-18 18:44:18.167017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.957 [2024-11-18 18:44:18.167303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.957 [2024-11-18 18:44:18.167335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.957 [2024-11-18 18:44:18.167358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.957 [2024-11-18 18:44:18.167381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.957 [2024-11-18 18:44:18.180593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.957 [2024-11-18 18:44:18.181056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.957 [2024-11-18 18:44:18.181099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.957 [2024-11-18 18:44:18.181141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.957 [2024-11-18 18:44:18.181427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.957 [2024-11-18 18:44:18.181726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.957 [2024-11-18 18:44:18.181759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.957 [2024-11-18 18:44:18.181790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.957 [2024-11-18 18:44:18.181814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.957 [2024-11-18 18:44:18.194982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.957 [2024-11-18 18:44:18.195447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.957 [2024-11-18 18:44:18.195488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.957 [2024-11-18 18:44:18.195515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.957 [2024-11-18 18:44:18.195810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.957 [2024-11-18 18:44:18.196097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.957 [2024-11-18 18:44:18.196128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.957 [2024-11-18 18:44:18.196151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.957 [2024-11-18 18:44:18.196172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.957 [2024-11-18 18:44:18.209383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.957 [2024-11-18 18:44:18.209837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.957 [2024-11-18 18:44:18.209880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.957 [2024-11-18 18:44:18.209906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.957 [2024-11-18 18:44:18.210199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.957 [2024-11-18 18:44:18.210483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.957 [2024-11-18 18:44:18.210516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.957 [2024-11-18 18:44:18.210540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.957 [2024-11-18 18:44:18.210562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.957 [2024-11-18 18:44:18.223805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.957 [2024-11-18 18:44:18.224268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.957 [2024-11-18 18:44:18.224311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.957 [2024-11-18 18:44:18.224338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.957 [2024-11-18 18:44:18.224634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.957 [2024-11-18 18:44:18.224931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.957 [2024-11-18 18:44:18.224963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.957 [2024-11-18 18:44:18.224987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.957 [2024-11-18 18:44:18.225009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.957 [2024-11-18 18:44:18.238205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.957 [2024-11-18 18:44:18.238669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.957 [2024-11-18 18:44:18.238712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.957 [2024-11-18 18:44:18.238740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.957 [2024-11-18 18:44:18.239024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.957 [2024-11-18 18:44:18.239309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.957 [2024-11-18 18:44:18.239341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.957 [2024-11-18 18:44:18.239364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.957 [2024-11-18 18:44:18.239387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.957 [2024-11-18 18:44:18.252793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.957 [2024-11-18 18:44:18.253223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.957 [2024-11-18 18:44:18.253264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.957 [2024-11-18 18:44:18.253292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.957 [2024-11-18 18:44:18.253576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.957 [2024-11-18 18:44:18.253873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.957 [2024-11-18 18:44:18.253906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.957 [2024-11-18 18:44:18.253929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.957 [2024-11-18 18:44:18.253952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.957 [2024-11-18 18:44:18.267385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.957 [2024-11-18 18:44:18.267850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.957 [2024-11-18 18:44:18.267892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.957 [2024-11-18 18:44:18.267919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.957 [2024-11-18 18:44:18.268202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.957 [2024-11-18 18:44:18.268487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.957 [2024-11-18 18:44:18.268520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.957 [2024-11-18 18:44:18.268544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.957 [2024-11-18 18:44:18.268567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.957 [2024-11-18 18:44:18.281740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.957 [2024-11-18 18:44:18.282201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.957 [2024-11-18 18:44:18.282249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.957 [2024-11-18 18:44:18.282276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.957 [2024-11-18 18:44:18.282559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.957 [2024-11-18 18:44:18.282855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.957 [2024-11-18 18:44:18.282889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.957 [2024-11-18 18:44:18.282913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.957 [2024-11-18 18:44:18.282946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:20.217 [2024-11-18 18:44:18.296182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:20.217 [2024-11-18 18:44:18.296638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.217 [2024-11-18 18:44:18.296680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:20.217 [2024-11-18 18:44:18.296707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:20.217 [2024-11-18 18:44:18.296989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:20.217 [2024-11-18 18:44:18.297275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:20.217 [2024-11-18 18:44:18.297308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:20.217 [2024-11-18 18:44:18.297332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:20.217 [2024-11-18 18:44:18.297354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:20.217 [2024-11-18 18:44:18.310589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:20.217 [2024-11-18 18:44:18.311142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.217 [2024-11-18 18:44:18.311203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:20.217 [2024-11-18 18:44:18.311230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:20.217 [2024-11-18 18:44:18.311520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:20.217 [2024-11-18 18:44:18.311821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:20.217 [2024-11-18 18:44:18.311854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:20.217 [2024-11-18 18:44:18.311881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:20.217 [2024-11-18 18:44:18.311903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:20.217 [2024-11-18 18:44:18.325134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:20.217 [2024-11-18 18:44:18.325558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.217 [2024-11-18 18:44:18.325602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:20.217 [2024-11-18 18:44:18.325652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:20.217 [2024-11-18 18:44:18.325945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:20.217 [2024-11-18 18:44:18.326231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:20.217 [2024-11-18 18:44:18.326264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:20.217 [2024-11-18 18:44:18.326288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:20.217 [2024-11-18 18:44:18.326310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:20.217 4625.67 IOPS, 18.07 MiB/s [2024-11-18T17:44:18.554Z]
00:37:20.217 [2024-11-18 18:44:18.340164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:20.217 [2024-11-18 18:44:18.340623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.217 [2024-11-18 18:44:18.340666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:20.217 [2024-11-18 18:44:18.340694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:20.217 [2024-11-18 18:44:18.340977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:20.217 [2024-11-18 18:44:18.341267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:20.217 [2024-11-18 18:44:18.341300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:20.217 [2024-11-18 18:44:18.341324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:20.217 [2024-11-18 18:44:18.341346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:20.217 [2024-11-18 18:44:18.354536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:20.217 [2024-11-18 18:44:18.355060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.217 [2024-11-18 18:44:18.355121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:20.217 [2024-11-18 18:44:18.355149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:20.217 [2024-11-18 18:44:18.355431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:20.217 [2024-11-18 18:44:18.355733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:20.217 [2024-11-18 18:44:18.355768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:20.217 [2024-11-18 18:44:18.355791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:20.217 [2024-11-18 18:44:18.355813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:20.217 [2024-11-18 18:44:18.369011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:20.217 [2024-11-18 18:44:18.369470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.217 [2024-11-18 18:44:18.369513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:20.217 [2024-11-18 18:44:18.369540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:20.217 [2024-11-18 18:44:18.369837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:20.217 [2024-11-18 18:44:18.370128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:20.217 [2024-11-18 18:44:18.370161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:20.217 [2024-11-18 18:44:18.370184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:20.217 [2024-11-18 18:44:18.370207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:20.217 [2024-11-18 18:44:18.383383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:20.217 [2024-11-18 18:44:18.383857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.217 [2024-11-18 18:44:18.383901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:20.217 [2024-11-18 18:44:18.383928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:20.217 [2024-11-18 18:44:18.384211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:20.217 [2024-11-18 18:44:18.384524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:20.217 [2024-11-18 18:44:18.384557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:20.217 [2024-11-18 18:44:18.384580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:20.217 [2024-11-18 18:44:18.384603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:20.217 [2024-11-18 18:44:18.397807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:20.217 [2024-11-18 18:44:18.398267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.217 [2024-11-18 18:44:18.398309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:20.217 [2024-11-18 18:44:18.398335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:20.217 [2024-11-18 18:44:18.398633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:20.217 [2024-11-18 18:44:18.398919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:20.217 [2024-11-18 18:44:18.398952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:20.218 [2024-11-18 18:44:18.398976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:20.218 [2024-11-18 18:44:18.398999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:20.218 [2024-11-18 18:44:18.412176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:20.218 [2024-11-18 18:44:18.412596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.218 [2024-11-18 18:44:18.412649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:20.218 [2024-11-18 18:44:18.412677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:20.218 [2024-11-18 18:44:18.412959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:20.218 [2024-11-18 18:44:18.413245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:20.218 [2024-11-18 18:44:18.413278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:20.218 [2024-11-18 18:44:18.413309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:20.218 [2024-11-18 18:44:18.413332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:20.218 [2024-11-18 18:44:18.426786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:20.218 [2024-11-18 18:44:18.427262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.218 [2024-11-18 18:44:18.427304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:20.218 [2024-11-18 18:44:18.427331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:20.218 [2024-11-18 18:44:18.427627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:20.218 [2024-11-18 18:44:18.427912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:20.218 [2024-11-18 18:44:18.427944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:20.218 [2024-11-18 18:44:18.427968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:20.218 [2024-11-18 18:44:18.427992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:20.218 [2024-11-18 18:44:18.441214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:20.218 [2024-11-18 18:44:18.441689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.218 [2024-11-18 18:44:18.441733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:20.218 [2024-11-18 18:44:18.441761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:20.218 [2024-11-18 18:44:18.442045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:20.218 [2024-11-18 18:44:18.442331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:20.218 [2024-11-18 18:44:18.442363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:20.218 [2024-11-18 18:44:18.442387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:20.218 [2024-11-18 18:44:18.442409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:20.218 [2024-11-18 18:44:18.455593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:20.218 [2024-11-18 18:44:18.456059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.218 [2024-11-18 18:44:18.456100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:20.218 [2024-11-18 18:44:18.456127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:20.218 [2024-11-18 18:44:18.456409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:20.218 [2024-11-18 18:44:18.456711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:20.218 [2024-11-18 18:44:18.456745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:20.218 [2024-11-18 18:44:18.456769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:20.218 [2024-11-18 18:44:18.456792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:20.218 [2024-11-18 18:44:18.469973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:20.218 [2024-11-18 18:44:18.470458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.218 [2024-11-18 18:44:18.470502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:20.218 [2024-11-18 18:44:18.470528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:20.218 [2024-11-18 18:44:18.470826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:20.218 [2024-11-18 18:44:18.471110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:20.218 [2024-11-18 18:44:18.471143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:20.218 [2024-11-18 18:44:18.471167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:20.218 [2024-11-18 18:44:18.471190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:20.218 [2024-11-18 18:44:18.484365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:20.218 [2024-11-18 18:44:18.484825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.218 [2024-11-18 18:44:18.484867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:20.218 [2024-11-18 18:44:18.484894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:20.218 [2024-11-18 18:44:18.485175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:20.218 [2024-11-18 18:44:18.485461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:20.218 [2024-11-18 18:44:18.485494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:20.218 [2024-11-18 18:44:18.485517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:20.218 [2024-11-18 18:44:18.485539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:20.218 [2024-11-18 18:44:18.498722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:20.218 [2024-11-18 18:44:18.499193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.218 [2024-11-18 18:44:18.499235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:20.218 [2024-11-18 18:44:18.499261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:20.218 [2024-11-18 18:44:18.499543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:20.218 [2024-11-18 18:44:18.499842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:20.218 [2024-11-18 18:44:18.499876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:20.218 [2024-11-18 18:44:18.499899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:20.218 [2024-11-18 18:44:18.499921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:20.218 [2024-11-18 18:44:18.513121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:20.218 [2024-11-18 18:44:18.513640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.218 [2024-11-18 18:44:18.513682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:20.218 [2024-11-18 18:44:18.513715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:20.218 [2024-11-18 18:44:18.514000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:20.218 [2024-11-18 18:44:18.514284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:20.218 [2024-11-18 18:44:18.514317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:20.218 [2024-11-18 18:44:18.514341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:20.218 [2024-11-18 18:44:18.514364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:20.218 [2024-11-18 18:44:18.527585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:20.218 [2024-11-18 18:44:18.528117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.218 [2024-11-18 18:44:18.528176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:20.218 [2024-11-18 18:44:18.528204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:20.218 [2024-11-18 18:44:18.528487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:20.218 [2024-11-18 18:44:18.528785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:20.218 [2024-11-18 18:44:18.528818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:20.218 [2024-11-18 18:44:18.528841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:20.218 [2024-11-18 18:44:18.528864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:20.218 [2024-11-18 18:44:18.542082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:20.218 [2024-11-18 18:44:18.542531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.218 [2024-11-18 18:44:18.542574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:20.218 [2024-11-18 18:44:18.542601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:20.218 [2024-11-18 18:44:18.542907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:20.219 [2024-11-18 18:44:18.543193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:20.219 [2024-11-18 18:44:18.543224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:20.219 [2024-11-18 18:44:18.543248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:20.219 [2024-11-18 18:44:18.543270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:20.478 [2024-11-18 18:44:18.556654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:20.478 [2024-11-18 18:44:18.557081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.478 [2024-11-18 18:44:18.557124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:20.478 [2024-11-18 18:44:18.557150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:20.478 [2024-11-18 18:44:18.557439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:20.478 [2024-11-18 18:44:18.557739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:20.478 [2024-11-18 18:44:18.557773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:20.478 [2024-11-18 18:44:18.557797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:20.478 [2024-11-18 18:44:18.557819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:20.478 [2024-11-18 18:44:18.571048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.478 [2024-11-18 18:44:18.571473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-11-18 18:44:18.571514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.478 [2024-11-18 18:44:18.571541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.478 [2024-11-18 18:44:18.571834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.478 [2024-11-18 18:44:18.572119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.478 [2024-11-18 18:44:18.572151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.478 [2024-11-18 18:44:18.572174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.478 [2024-11-18 18:44:18.572196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.478 [2024-11-18 18:44:18.585417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.478 [2024-11-18 18:44:18.585890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-11-18 18:44:18.585932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.478 [2024-11-18 18:44:18.585959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.478 [2024-11-18 18:44:18.586242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.478 [2024-11-18 18:44:18.586526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.478 [2024-11-18 18:44:18.586558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.478 [2024-11-18 18:44:18.586635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.478 [2024-11-18 18:44:18.586661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.478 [2024-11-18 18:44:18.599871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.478 [2024-11-18 18:44:18.600407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-11-18 18:44:18.600467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.478 [2024-11-18 18:44:18.600494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.478 [2024-11-18 18:44:18.600789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.478 [2024-11-18 18:44:18.601074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.478 [2024-11-18 18:44:18.601113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.478 [2024-11-18 18:44:18.601138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.478 [2024-11-18 18:44:18.601161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.478 [2024-11-18 18:44:18.614356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.478 [2024-11-18 18:44:18.614783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.478 [2024-11-18 18:44:18.614826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.479 [2024-11-18 18:44:18.614854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.479 [2024-11-18 18:44:18.615136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.479 [2024-11-18 18:44:18.615423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.479 [2024-11-18 18:44:18.615454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.479 [2024-11-18 18:44:18.615477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.479 [2024-11-18 18:44:18.615499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.479 [2024-11-18 18:44:18.628746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.479 [2024-11-18 18:44:18.629210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-11-18 18:44:18.629258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.479 [2024-11-18 18:44:18.629285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.479 [2024-11-18 18:44:18.629567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.479 [2024-11-18 18:44:18.629864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.479 [2024-11-18 18:44:18.629897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.479 [2024-11-18 18:44:18.629921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.479 [2024-11-18 18:44:18.629944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.479 [2024-11-18 18:44:18.643164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.479 [2024-11-18 18:44:18.643636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-11-18 18:44:18.643678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.479 [2024-11-18 18:44:18.643706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.479 [2024-11-18 18:44:18.643990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.479 [2024-11-18 18:44:18.644277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.479 [2024-11-18 18:44:18.644310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.479 [2024-11-18 18:44:18.644333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.479 [2024-11-18 18:44:18.644362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.479 [2024-11-18 18:44:18.657565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.479 [2024-11-18 18:44:18.658037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-11-18 18:44:18.658078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.479 [2024-11-18 18:44:18.658105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.479 [2024-11-18 18:44:18.658387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.479 [2024-11-18 18:44:18.658685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.479 [2024-11-18 18:44:18.658718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.479 [2024-11-18 18:44:18.658741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.479 [2024-11-18 18:44:18.658763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.479 [2024-11-18 18:44:18.671958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.479 [2024-11-18 18:44:18.672392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-11-18 18:44:18.672434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.479 [2024-11-18 18:44:18.672460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.479 [2024-11-18 18:44:18.672765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.479 [2024-11-18 18:44:18.673053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.479 [2024-11-18 18:44:18.673086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.479 [2024-11-18 18:44:18.673109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.479 [2024-11-18 18:44:18.673131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.479 [2024-11-18 18:44:18.686330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.479 [2024-11-18 18:44:18.686798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-11-18 18:44:18.686841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.479 [2024-11-18 18:44:18.686867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.479 [2024-11-18 18:44:18.687150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.479 [2024-11-18 18:44:18.687445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.479 [2024-11-18 18:44:18.687477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.479 [2024-11-18 18:44:18.687501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.479 [2024-11-18 18:44:18.687523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.479 [2024-11-18 18:44:18.700721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.479 [2024-11-18 18:44:18.701229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-11-18 18:44:18.701291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.479 [2024-11-18 18:44:18.701318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.479 [2024-11-18 18:44:18.701598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.479 [2024-11-18 18:44:18.701907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.479 [2024-11-18 18:44:18.701940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.479 [2024-11-18 18:44:18.701963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.479 [2024-11-18 18:44:18.701985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.479 [2024-11-18 18:44:18.715194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.479 [2024-11-18 18:44:18.715646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.479 [2024-11-18 18:44:18.715689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.480 [2024-11-18 18:44:18.715716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.480 [2024-11-18 18:44:18.715998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.480 [2024-11-18 18:44:18.716283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.480 [2024-11-18 18:44:18.716316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.480 [2024-11-18 18:44:18.716340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.480 [2024-11-18 18:44:18.716362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.480 [2024-11-18 18:44:18.729567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.480 [2024-11-18 18:44:18.730037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-11-18 18:44:18.730079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.480 [2024-11-18 18:44:18.730107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.480 [2024-11-18 18:44:18.730397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.480 [2024-11-18 18:44:18.730705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.480 [2024-11-18 18:44:18.730739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.480 [2024-11-18 18:44:18.730763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.480 [2024-11-18 18:44:18.730785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.480 [2024-11-18 18:44:18.743958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.480 [2024-11-18 18:44:18.744405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-11-18 18:44:18.744448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.480 [2024-11-18 18:44:18.744482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.480 [2024-11-18 18:44:18.744782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.480 [2024-11-18 18:44:18.745069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.480 [2024-11-18 18:44:18.745102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.480 [2024-11-18 18:44:18.745126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.480 [2024-11-18 18:44:18.745149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.480 [2024-11-18 18:44:18.758554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.480 [2024-11-18 18:44:18.758988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-11-18 18:44:18.759032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.480 [2024-11-18 18:44:18.759059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.480 [2024-11-18 18:44:18.759341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.480 [2024-11-18 18:44:18.759640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.480 [2024-11-18 18:44:18.759674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.480 [2024-11-18 18:44:18.759697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.480 [2024-11-18 18:44:18.759719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.480 [2024-11-18 18:44:18.773125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.480 [2024-11-18 18:44:18.773568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-11-18 18:44:18.773621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.480 [2024-11-18 18:44:18.773662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.480 [2024-11-18 18:44:18.773944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.480 [2024-11-18 18:44:18.774229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.480 [2024-11-18 18:44:18.774270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.480 [2024-11-18 18:44:18.774294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.480 [2024-11-18 18:44:18.774316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.480 [2024-11-18 18:44:18.787502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.480 [2024-11-18 18:44:18.787982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-11-18 18:44:18.788025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.480 [2024-11-18 18:44:18.788052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.480 [2024-11-18 18:44:18.788333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.480 [2024-11-18 18:44:18.788636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.480 [2024-11-18 18:44:18.788674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.480 [2024-11-18 18:44:18.788698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.480 [2024-11-18 18:44:18.788721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.480 [2024-11-18 18:44:18.801930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.480 [2024-11-18 18:44:18.802365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.480 [2024-11-18 18:44:18.802407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.480 [2024-11-18 18:44:18.802435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.480 [2024-11-18 18:44:18.802731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.480 [2024-11-18 18:44:18.803016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.480 [2024-11-18 18:44:18.803048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.480 [2024-11-18 18:44:18.803071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.480 [2024-11-18 18:44:18.803094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.739 [2024-11-18 18:44:18.816505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.739 [2024-11-18 18:44:18.816959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-11-18 18:44:18.817002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.739 [2024-11-18 18:44:18.817028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.739 [2024-11-18 18:44:18.817309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.739 [2024-11-18 18:44:18.817594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.739 [2024-11-18 18:44:18.817640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.740 [2024-11-18 18:44:18.817665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.740 [2024-11-18 18:44:18.817687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.740 [2024-11-18 18:44:18.830926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.740 [2024-11-18 18:44:18.831375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-11-18 18:44:18.831417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.740 [2024-11-18 18:44:18.831443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.740 [2024-11-18 18:44:18.831740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.740 [2024-11-18 18:44:18.832026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.740 [2024-11-18 18:44:18.832059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.740 [2024-11-18 18:44:18.832088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.740 [2024-11-18 18:44:18.832112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.740 [2024-11-18 18:44:18.845305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.740 [2024-11-18 18:44:18.845728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-11-18 18:44:18.845771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.740 [2024-11-18 18:44:18.845798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.740 [2024-11-18 18:44:18.846082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.740 [2024-11-18 18:44:18.846368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.740 [2024-11-18 18:44:18.846399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.740 [2024-11-18 18:44:18.846423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.740 [2024-11-18 18:44:18.846445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.740 [2024-11-18 18:44:18.859895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.740 [2024-11-18 18:44:18.860358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-11-18 18:44:18.860400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.740 [2024-11-18 18:44:18.860427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.740 [2024-11-18 18:44:18.860723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.740 [2024-11-18 18:44:18.861008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.740 [2024-11-18 18:44:18.861041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.740 [2024-11-18 18:44:18.861064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.740 [2024-11-18 18:44:18.861086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.740 [2024-11-18 18:44:18.874278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.740 [2024-11-18 18:44:18.874746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-11-18 18:44:18.874788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.740 [2024-11-18 18:44:18.874815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.740 [2024-11-18 18:44:18.875098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.740 [2024-11-18 18:44:18.875382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.740 [2024-11-18 18:44:18.875414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.740 [2024-11-18 18:44:18.875438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.740 [2024-11-18 18:44:18.875460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.740 [2024-11-18 18:44:18.888660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.740 [2024-11-18 18:44:18.889125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-11-18 18:44:18.889166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.740 [2024-11-18 18:44:18.889193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.740 [2024-11-18 18:44:18.889474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.740 [2024-11-18 18:44:18.889772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.740 [2024-11-18 18:44:18.889806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.740 [2024-11-18 18:44:18.889829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.740 [2024-11-18 18:44:18.889852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.740 [2024-11-18 18:44:18.903049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.740 [2024-11-18 18:44:18.903470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-11-18 18:44:18.903512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.740 [2024-11-18 18:44:18.903538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.740 [2024-11-18 18:44:18.903834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.740 [2024-11-18 18:44:18.904119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.740 [2024-11-18 18:44:18.904151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.740 [2024-11-18 18:44:18.904174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.740 [2024-11-18 18:44:18.904197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.740 [2024-11-18 18:44:18.917601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.740 [2024-11-18 18:44:18.918030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-11-18 18:44:18.918073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.740 [2024-11-18 18:44:18.918100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.740 [2024-11-18 18:44:18.918383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.740 [2024-11-18 18:44:18.918681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.740 [2024-11-18 18:44:18.918714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.740 [2024-11-18 18:44:18.918737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.740 [2024-11-18 18:44:18.918760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.740 [2024-11-18 18:44:18.931975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.740 [2024-11-18 18:44:18.932445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-11-18 18:44:18.932488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.740 [2024-11-18 18:44:18.932515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.740 [2024-11-18 18:44:18.932812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.740 [2024-11-18 18:44:18.933097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.740 [2024-11-18 18:44:18.933129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.740 [2024-11-18 18:44:18.933152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.740 [2024-11-18 18:44:18.933175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.740 [2024-11-18 18:44:18.946600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.740 [2024-11-18 18:44:18.947040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-11-18 18:44:18.947084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.740 [2024-11-18 18:44:18.947111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.740 [2024-11-18 18:44:18.947394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.740 [2024-11-18 18:44:18.947694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.740 [2024-11-18 18:44:18.947727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.740 [2024-11-18 18:44:18.947750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.740 [2024-11-18 18:44:18.947772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.740 [2024-11-18 18:44:18.960960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.740 [2024-11-18 18:44:18.961409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-11-18 18:44:18.961452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.740 [2024-11-18 18:44:18.961479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.741 [2024-11-18 18:44:18.961773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.741 [2024-11-18 18:44:18.962059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.741 [2024-11-18 18:44:18.962092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.741 [2024-11-18 18:44:18.962116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.741 [2024-11-18 18:44:18.962138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.741 [2024-11-18 18:44:18.975526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.741 [2024-11-18 18:44:18.975988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.741 [2024-11-18 18:44:18.976029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.741 [2024-11-18 18:44:18.976056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.741 [2024-11-18 18:44:18.976344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.741 [2024-11-18 18:44:18.976642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.741 [2024-11-18 18:44:18.976675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.741 [2024-11-18 18:44:18.976699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.741 [2024-11-18 18:44:18.976721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.741 [2024-11-18 18:44:18.989881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.741 [2024-11-18 18:44:18.990303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.741 [2024-11-18 18:44:18.990345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.741 [2024-11-18 18:44:18.990372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.741 [2024-11-18 18:44:18.990666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.741 [2024-11-18 18:44:18.990951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.741 [2024-11-18 18:44:18.990983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.741 [2024-11-18 18:44:18.991006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.741 [2024-11-18 18:44:18.991028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.741 [2024-11-18 18:44:19.004432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.741 [2024-11-18 18:44:19.004900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.741 [2024-11-18 18:44:19.004956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.741 [2024-11-18 18:44:19.004983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.741 [2024-11-18 18:44:19.005267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.741 [2024-11-18 18:44:19.005551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.741 [2024-11-18 18:44:19.005582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.741 [2024-11-18 18:44:19.005605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.741 [2024-11-18 18:44:19.005641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.741 [2024-11-18 18:44:19.018837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.741 [2024-11-18 18:44:19.019301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.741 [2024-11-18 18:44:19.019343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.741 [2024-11-18 18:44:19.019369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.741 [2024-11-18 18:44:19.019664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.741 [2024-11-18 18:44:19.019957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.741 [2024-11-18 18:44:19.019989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.741 [2024-11-18 18:44:19.020012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.741 [2024-11-18 18:44:19.020035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.741 [2024-11-18 18:44:19.033247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.741 [2024-11-18 18:44:19.033705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.741 [2024-11-18 18:44:19.033746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.741 [2024-11-18 18:44:19.033773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.741 [2024-11-18 18:44:19.034056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.741 [2024-11-18 18:44:19.034341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.741 [2024-11-18 18:44:19.034374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.741 [2024-11-18 18:44:19.034397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.741 [2024-11-18 18:44:19.034419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.741 [2024-11-18 18:44:19.047591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.741 [2024-11-18 18:44:19.048058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.741 [2024-11-18 18:44:19.048100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.741 [2024-11-18 18:44:19.048127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.741 [2024-11-18 18:44:19.048409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.741 [2024-11-18 18:44:19.048706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.741 [2024-11-18 18:44:19.048739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.741 [2024-11-18 18:44:19.048764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.741 [2024-11-18 18:44:19.048785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.741 [2024-11-18 18:44:19.061954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.741 [2024-11-18 18:44:19.062425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.741 [2024-11-18 18:44:19.062466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.741 [2024-11-18 18:44:19.062493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.741 [2024-11-18 18:44:19.062790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.741 [2024-11-18 18:44:19.063076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.741 [2024-11-18 18:44:19.063108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.741 [2024-11-18 18:44:19.063138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.741 [2024-11-18 18:44:19.063162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.999 [2024-11-18 18:44:19.076340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.999 [2024-11-18 18:44:19.076814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.999 [2024-11-18 18:44:19.076857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.999 [2024-11-18 18:44:19.076884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.999 [2024-11-18 18:44:19.077166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.999 [2024-11-18 18:44:19.077450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.999 [2024-11-18 18:44:19.077482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.999 [2024-11-18 18:44:19.077505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.999 [2024-11-18 18:44:19.077528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.999 [2024-11-18 18:44:19.090697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.999 [2024-11-18 18:44:19.091152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.999 [2024-11-18 18:44:19.091194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.999 [2024-11-18 18:44:19.091221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.999 [2024-11-18 18:44:19.091503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.999 [2024-11-18 18:44:19.091801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.999 [2024-11-18 18:44:19.091834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.999 [2024-11-18 18:44:19.091857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.999 [2024-11-18 18:44:19.091879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.999 [2024-11-18 18:44:19.105270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.999 [2024-11-18 18:44:19.105733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.999 [2024-11-18 18:44:19.105775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.999 [2024-11-18 18:44:19.105801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.999 [2024-11-18 18:44:19.106082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.999 [2024-11-18 18:44:19.106367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.999 [2024-11-18 18:44:19.106399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.999 [2024-11-18 18:44:19.106423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.999 [2024-11-18 18:44:19.106445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.999 [2024-11-18 18:44:19.119654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.999 [2024-11-18 18:44:19.120089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.999 [2024-11-18 18:44:19.120130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.999 [2024-11-18 18:44:19.120157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.999 [2024-11-18 18:44:19.120438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.999 [2024-11-18 18:44:19.120734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.999 [2024-11-18 18:44:19.120766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.999 [2024-11-18 18:44:19.120790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.999 [2024-11-18 18:44:19.120813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.999 [2024-11-18 18:44:19.134230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.999 [2024-11-18 18:44:19.134680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.999 [2024-11-18 18:44:19.134723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.999 [2024-11-18 18:44:19.134750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.999 [2024-11-18 18:44:19.135032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.999 [2024-11-18 18:44:19.135317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.999 [2024-11-18 18:44:19.135349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.999 [2024-11-18 18:44:19.135373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.999 [2024-11-18 18:44:19.135395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.999 [2024-11-18 18:44:19.148818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.999 [2024-11-18 18:44:19.149238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.999 [2024-11-18 18:44:19.149279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.999 [2024-11-18 18:44:19.149306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.999 [2024-11-18 18:44:19.149588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.999 [2024-11-18 18:44:19.149886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.999 [2024-11-18 18:44:19.149918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.999 [2024-11-18 18:44:19.149942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.999 [2024-11-18 18:44:19.149964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.999 [2024-11-18 18:44:19.163377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.999 [2024-11-18 18:44:19.163844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.999 [2024-11-18 18:44:19.163892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.999 [2024-11-18 18:44:19.163919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.999 [2024-11-18 18:44:19.164201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.999 [2024-11-18 18:44:19.164485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.999 [2024-11-18 18:44:19.164517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.999 [2024-11-18 18:44:19.164541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.999 [2024-11-18 18:44:19.164563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.999 [2024-11-18 18:44:19.177751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.999 [2024-11-18 18:44:19.178210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.999 [2024-11-18 18:44:19.178252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.999 [2024-11-18 18:44:19.178279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.999 [2024-11-18 18:44:19.178561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.999 [2024-11-18 18:44:19.178856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.999 [2024-11-18 18:44:19.178889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.999 [2024-11-18 18:44:19.178914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.999 [2024-11-18 18:44:19.178936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.999 [2024-11-18 18:44:19.192113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.999 [2024-11-18 18:44:19.192571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.999 [2024-11-18 18:44:19.192620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.999 [2024-11-18 18:44:19.192649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.999 [2024-11-18 18:44:19.192931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.999 [2024-11-18 18:44:19.193217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.999 [2024-11-18 18:44:19.193249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.999 [2024-11-18 18:44:19.193272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.999 [2024-11-18 18:44:19.193295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.999 [2024-11-18 18:44:19.206516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.999 [2024-11-18 18:44:19.206979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.999 [2024-11-18 18:44:19.207020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.999 [2024-11-18 18:44:19.207046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.999 [2024-11-18 18:44:19.207348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.999 [2024-11-18 18:44:19.207647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.000 [2024-11-18 18:44:19.207680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.000 [2024-11-18 18:44:19.207703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.000 [2024-11-18 18:44:19.207725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.000 [2024-11-18 18:44:19.220903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.000 [2024-11-18 18:44:19.221349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.000 [2024-11-18 18:44:19.221391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.000 [2024-11-18 18:44:19.221417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.000 [2024-11-18 18:44:19.221712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.000 [2024-11-18 18:44:19.221998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.000 [2024-11-18 18:44:19.222031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.000 [2024-11-18 18:44:19.222054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.000 [2024-11-18 18:44:19.222076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.000 [2024-11-18 18:44:19.235279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.000 [2024-11-18 18:44:19.235744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.000 [2024-11-18 18:44:19.235786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.000 [2024-11-18 18:44:19.235813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.000 [2024-11-18 18:44:19.236095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.000 [2024-11-18 18:44:19.236380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.000 [2024-11-18 18:44:19.236412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.000 [2024-11-18 18:44:19.236436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.000 [2024-11-18 18:44:19.236458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.000 [2024-11-18 18:44:19.249653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.000 [2024-11-18 18:44:19.250107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.000 [2024-11-18 18:44:19.250148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.000 [2024-11-18 18:44:19.250175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.000 [2024-11-18 18:44:19.250457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.000 [2024-11-18 18:44:19.250755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.000 [2024-11-18 18:44:19.250794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.000 [2024-11-18 18:44:19.250820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.000 [2024-11-18 18:44:19.250842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.000 [2024-11-18 18:44:19.264026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.000 [2024-11-18 18:44:19.264477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.000 [2024-11-18 18:44:19.264537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.000 [2024-11-18 18:44:19.264564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.000 [2024-11-18 18:44:19.264858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.000 [2024-11-18 18:44:19.265144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.000 [2024-11-18 18:44:19.265176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.000 [2024-11-18 18:44:19.265200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.000 [2024-11-18 18:44:19.265222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.000 [2024-11-18 18:44:19.278420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.000 [2024-11-18 18:44:19.278873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.000 [2024-11-18 18:44:19.278915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.000 [2024-11-18 18:44:19.278942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.000 [2024-11-18 18:44:19.279225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.000 [2024-11-18 18:44:19.279509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.000 [2024-11-18 18:44:19.279541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.000 [2024-11-18 18:44:19.279565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.000 [2024-11-18 18:44:19.279587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.000 [2024-11-18 18:44:19.293025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.000 [2024-11-18 18:44:19.293579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.000 [2024-11-18 18:44:19.293630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.000 [2024-11-18 18:44:19.293659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.000 [2024-11-18 18:44:19.293939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.000 [2024-11-18 18:44:19.294225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.000 [2024-11-18 18:44:19.294257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.000 [2024-11-18 18:44:19.294281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.000 [2024-11-18 18:44:19.294309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.000 [2024-11-18 18:44:19.307495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.000 [2024-11-18 18:44:19.307990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.000 [2024-11-18 18:44:19.308033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.000 [2024-11-18 18:44:19.308060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.000 [2024-11-18 18:44:19.308342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.000 [2024-11-18 18:44:19.308640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.000 [2024-11-18 18:44:19.308673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.000 [2024-11-18 18:44:19.308698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.000 [2024-11-18 18:44:19.308720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.000 [2024-11-18 18:44:19.321909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.000 [2024-11-18 18:44:19.322434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.000 [2024-11-18 18:44:19.322475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.000 [2024-11-18 18:44:19.322502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.000 [2024-11-18 18:44:19.322795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.000 [2024-11-18 18:44:19.323081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.000 [2024-11-18 18:44:19.323113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.000 [2024-11-18 18:44:19.323137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.000 [2024-11-18 18:44:19.323160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.259 [2024-11-18 18:44:19.336388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.259 3469.25 IOPS, 13.55 MiB/s [2024-11-18T17:44:19.596Z] [2024-11-18 18:44:19.338618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.259 [2024-11-18 18:44:19.338660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.259 [2024-11-18 18:44:19.338688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.259 [2024-11-18 18:44:19.338972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.259 [2024-11-18 18:44:19.339257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.259 [2024-11-18 18:44:19.339289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.259 [2024-11-18 18:44:19.339313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.259 [2024-11-18 18:44:19.339336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.259 [2024-11-18 18:44:19.350859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.259 [2024-11-18 18:44:19.351297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.259 [2024-11-18 18:44:19.351340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.259 [2024-11-18 18:44:19.351368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.259 [2024-11-18 18:44:19.351662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.259 [2024-11-18 18:44:19.351947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.259 [2024-11-18 18:44:19.351979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.259 [2024-11-18 18:44:19.352002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.259 [2024-11-18 18:44:19.352024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.259 [2024-11-18 18:44:19.365450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.259 [2024-11-18 18:44:19.365890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.259 [2024-11-18 18:44:19.365935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.259 [2024-11-18 18:44:19.365962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.259 [2024-11-18 18:44:19.366244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.259 [2024-11-18 18:44:19.366529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.259 [2024-11-18 18:44:19.366561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.259 [2024-11-18 18:44:19.366584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.259 [2024-11-18 18:44:19.366620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.259 [2024-11-18 18:44:19.379853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.259 [2024-11-18 18:44:19.380320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.259 [2024-11-18 18:44:19.380361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.259 [2024-11-18 18:44:19.380388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.259 [2024-11-18 18:44:19.380682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.259 [2024-11-18 18:44:19.380966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.259 [2024-11-18 18:44:19.380997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.259 [2024-11-18 18:44:19.381021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.259 [2024-11-18 18:44:19.381043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.259 [2024-11-18 18:44:19.394242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.259 [2024-11-18 18:44:19.394713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.259 [2024-11-18 18:44:19.394755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.259 [2024-11-18 18:44:19.394788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.259 [2024-11-18 18:44:19.395072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.259 [2024-11-18 18:44:19.395357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.259 [2024-11-18 18:44:19.395390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.259 [2024-11-18 18:44:19.395413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.259 [2024-11-18 18:44:19.395436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.259 [2024-11-18 18:44:19.408618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.259 [2024-11-18 18:44:19.409072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.259 [2024-11-18 18:44:19.409112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.259 [2024-11-18 18:44:19.409139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.259 [2024-11-18 18:44:19.409421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.259 [2024-11-18 18:44:19.409716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.259 [2024-11-18 18:44:19.409761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.259 [2024-11-18 18:44:19.409785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.259 [2024-11-18 18:44:19.409808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.259 [2024-11-18 18:44:19.422977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.259 [2024-11-18 18:44:19.423442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.259 [2024-11-18 18:44:19.423483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.259 [2024-11-18 18:44:19.423510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.259 [2024-11-18 18:44:19.423805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.259 [2024-11-18 18:44:19.424089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.259 [2024-11-18 18:44:19.424121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.259 [2024-11-18 18:44:19.424144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.259 [2024-11-18 18:44:19.424166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.259 [2024-11-18 18:44:19.437374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.259 [2024-11-18 18:44:19.437831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.259 [2024-11-18 18:44:19.437882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.260 [2024-11-18 18:44:19.437908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.260 [2024-11-18 18:44:19.438223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.260 [2024-11-18 18:44:19.438509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.260 [2024-11-18 18:44:19.438541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.260 [2024-11-18 18:44:19.438564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.260 [2024-11-18 18:44:19.438595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.260 [2024-11-18 18:44:19.451821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.260 [2024-11-18 18:44:19.452268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.260 [2024-11-18 18:44:19.452309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.260 [2024-11-18 18:44:19.452336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.260 [2024-11-18 18:44:19.452626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.260 [2024-11-18 18:44:19.452912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.260 [2024-11-18 18:44:19.452944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.260 [2024-11-18 18:44:19.452968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.260 [2024-11-18 18:44:19.452991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.260 [2024-11-18 18:44:19.466211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.260 [2024-11-18 18:44:19.466667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.260 [2024-11-18 18:44:19.466709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.260 [2024-11-18 18:44:19.466737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.260 [2024-11-18 18:44:19.467019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.260 [2024-11-18 18:44:19.467304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.260 [2024-11-18 18:44:19.467336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.260 [2024-11-18 18:44:19.467359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.260 [2024-11-18 18:44:19.467381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.260 [2024-11-18 18:44:19.480783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.260 [2024-11-18 18:44:19.481254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.260 [2024-11-18 18:44:19.481297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.260 [2024-11-18 18:44:19.481323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.260 [2024-11-18 18:44:19.481618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.260 [2024-11-18 18:44:19.481903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.260 [2024-11-18 18:44:19.481940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.260 [2024-11-18 18:44:19.481965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.260 [2024-11-18 18:44:19.481987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.260 [2024-11-18 18:44:19.495187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.260 [2024-11-18 18:44:19.495637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.260 [2024-11-18 18:44:19.495679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.260 [2024-11-18 18:44:19.495705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.260 [2024-11-18 18:44:19.495987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.260 [2024-11-18 18:44:19.496272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.260 [2024-11-18 18:44:19.496303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.260 [2024-11-18 18:44:19.496327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.260 [2024-11-18 18:44:19.496349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.260 [2024-11-18 18:44:19.509754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.260 [2024-11-18 18:44:19.510226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.260 [2024-11-18 18:44:19.510268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.260 [2024-11-18 18:44:19.510294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.260 [2024-11-18 18:44:19.510576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.260 [2024-11-18 18:44:19.510872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.260 [2024-11-18 18:44:19.510905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.260 [2024-11-18 18:44:19.510928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.260 [2024-11-18 18:44:19.510950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.260 [2024-11-18 18:44:19.524115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.260 [2024-11-18 18:44:19.524559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.260 [2024-11-18 18:44:19.524599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.260 [2024-11-18 18:44:19.524637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.260 [2024-11-18 18:44:19.524920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.260 [2024-11-18 18:44:19.525206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.260 [2024-11-18 18:44:19.525238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.260 [2024-11-18 18:44:19.525262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.260 [2024-11-18 18:44:19.525291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.260 [2024-11-18 18:44:19.538480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.260 [2024-11-18 18:44:19.538957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.260 [2024-11-18 18:44:19.538999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.260 [2024-11-18 18:44:19.539026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.260 [2024-11-18 18:44:19.539317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.260 [2024-11-18 18:44:19.539601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.260 [2024-11-18 18:44:19.539644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.260 [2024-11-18 18:44:19.539668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.260 [2024-11-18 18:44:19.539691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.260 [2024-11-18 18:44:19.552864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.260 [2024-11-18 18:44:19.553308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.260 [2024-11-18 18:44:19.553349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.260 [2024-11-18 18:44:19.553376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.260 [2024-11-18 18:44:19.553672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.260 [2024-11-18 18:44:19.553957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.260 [2024-11-18 18:44:19.553989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.260 [2024-11-18 18:44:19.554012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.260 [2024-11-18 18:44:19.554034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.260 [2024-11-18 18:44:19.567215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.260 [2024-11-18 18:44:19.567714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.261 [2024-11-18 18:44:19.567758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.261 [2024-11-18 18:44:19.567785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.261 [2024-11-18 18:44:19.568069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.261 [2024-11-18 18:44:19.568354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.261 [2024-11-18 18:44:19.568386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.261 [2024-11-18 18:44:19.568409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.261 [2024-11-18 18:44:19.568430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.261 [2024-11-18 18:44:19.581762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.261 [2024-11-18 18:44:19.582244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.261 [2024-11-18 18:44:19.582287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.261 [2024-11-18 18:44:19.582314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.261 [2024-11-18 18:44:19.582598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.261 [2024-11-18 18:44:19.582896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.261 [2024-11-18 18:44:19.582928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.261 [2024-11-18 18:44:19.582952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.261 [2024-11-18 18:44:19.582973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.520 [2024-11-18 18:44:19.596141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.520 [2024-11-18 18:44:19.596659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.520 [2024-11-18 18:44:19.596701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.520 [2024-11-18 18:44:19.596727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.520 [2024-11-18 18:44:19.597009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.520 [2024-11-18 18:44:19.597293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.520 [2024-11-18 18:44:19.597325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.520 [2024-11-18 18:44:19.597349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.520 [2024-11-18 18:44:19.597372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.520 [2024-11-18 18:44:19.610528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.520 [2024-11-18 18:44:19.610990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.520 [2024-11-18 18:44:19.611032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.520 [2024-11-18 18:44:19.611060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.520 [2024-11-18 18:44:19.611343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.520 [2024-11-18 18:44:19.611639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.520 [2024-11-18 18:44:19.611671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.520 [2024-11-18 18:44:19.611694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.520 [2024-11-18 18:44:19.611732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.520 [2024-11-18 18:44:19.624919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.520 [2024-11-18 18:44:19.625461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.520 [2024-11-18 18:44:19.625526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.520 [2024-11-18 18:44:19.625559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.520 [2024-11-18 18:44:19.625854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.520 [2024-11-18 18:44:19.626140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.520 [2024-11-18 18:44:19.626172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.520 [2024-11-18 18:44:19.626195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.520 [2024-11-18 18:44:19.626217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.520 [2024-11-18 18:44:19.639425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.520 [2024-11-18 18:44:19.639870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.520 [2024-11-18 18:44:19.639912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.520 [2024-11-18 18:44:19.639939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.520 [2024-11-18 18:44:19.640222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.520 [2024-11-18 18:44:19.640507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.520 [2024-11-18 18:44:19.640539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.520 [2024-11-18 18:44:19.640563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.520 [2024-11-18 18:44:19.640584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.520 [2024-11-18 18:44:19.654006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.520 [2024-11-18 18:44:19.654520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.520 [2024-11-18 18:44:19.654561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.520 [2024-11-18 18:44:19.654588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.520 [2024-11-18 18:44:19.654879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.520 [2024-11-18 18:44:19.655165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.520 [2024-11-18 18:44:19.655197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.520 [2024-11-18 18:44:19.655221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.520 [2024-11-18 18:44:19.655243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.520 [2024-11-18 18:44:19.668404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.520 [2024-11-18 18:44:19.668860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.520 [2024-11-18 18:44:19.668910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.520 [2024-11-18 18:44:19.668937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.520 [2024-11-18 18:44:19.669219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.520 [2024-11-18 18:44:19.669522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.520 [2024-11-18 18:44:19.669554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.520 [2024-11-18 18:44:19.669577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.520 [2024-11-18 18:44:19.669616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.520 [2024-11-18 18:44:19.682799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.520 [2024-11-18 18:44:19.683351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.520 [2024-11-18 18:44:19.683412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.520 [2024-11-18 18:44:19.683440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.520 [2024-11-18 18:44:19.683735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.520 [2024-11-18 18:44:19.684020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.520 [2024-11-18 18:44:19.684052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.520 [2024-11-18 18:44:19.684076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.520 [2024-11-18 18:44:19.684098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.520 [2024-11-18 18:44:19.697316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.520 [2024-11-18 18:44:19.697791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.520 [2024-11-18 18:44:19.697835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.520 [2024-11-18 18:44:19.697862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.520 [2024-11-18 18:44:19.698144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.520 [2024-11-18 18:44:19.698429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.520 [2024-11-18 18:44:19.698467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.520 [2024-11-18 18:44:19.698490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.520 [2024-11-18 18:44:19.698522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.520 [2024-11-18 18:44:19.711770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.520 [2024-11-18 18:44:19.712299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.521 [2024-11-18 18:44:19.712340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.521 [2024-11-18 18:44:19.712367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.521 [2024-11-18 18:44:19.712664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.521 [2024-11-18 18:44:19.712949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.521 [2024-11-18 18:44:19.712982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.521 [2024-11-18 18:44:19.713019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.521 [2024-11-18 18:44:19.713042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.521 [2024-11-18 18:44:19.726248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.521 [2024-11-18 18:44:19.726669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.521 [2024-11-18 18:44:19.726711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.521 [2024-11-18 18:44:19.726738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.521 [2024-11-18 18:44:19.727020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.521 [2024-11-18 18:44:19.727333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.521 [2024-11-18 18:44:19.727366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.521 [2024-11-18 18:44:19.727389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.521 [2024-11-18 18:44:19.727412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.521 [2024-11-18 18:44:19.740634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.521 [2024-11-18 18:44:19.741099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.521 [2024-11-18 18:44:19.741142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.521 [2024-11-18 18:44:19.741169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.521 [2024-11-18 18:44:19.741451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.521 [2024-11-18 18:44:19.741748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.521 [2024-11-18 18:44:19.741780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.521 [2024-11-18 18:44:19.741803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.521 [2024-11-18 18:44:19.741826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.521 [2024-11-18 18:44:19.755042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.521 [2024-11-18 18:44:19.755488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.521 [2024-11-18 18:44:19.755529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.521 [2024-11-18 18:44:19.755557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.521 [2024-11-18 18:44:19.755850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.521 [2024-11-18 18:44:19.756136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.521 [2024-11-18 18:44:19.756167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.521 [2024-11-18 18:44:19.756191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.521 [2024-11-18 18:44:19.756213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.521 [2024-11-18 18:44:19.769462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.521 [2024-11-18 18:44:19.769919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.521 [2024-11-18 18:44:19.769961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.521 [2024-11-18 18:44:19.769989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.521 [2024-11-18 18:44:19.770271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.521 [2024-11-18 18:44:19.770556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.521 [2024-11-18 18:44:19.770588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.521 [2024-11-18 18:44:19.770625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.521 [2024-11-18 18:44:19.770649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.521 [2024-11-18 18:44:19.783941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.521 [2024-11-18 18:44:19.784393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.521 [2024-11-18 18:44:19.784433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.521 [2024-11-18 18:44:19.784460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.521 [2024-11-18 18:44:19.784754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.521 [2024-11-18 18:44:19.785054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.521 [2024-11-18 18:44:19.785086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.521 [2024-11-18 18:44:19.785109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.521 [2024-11-18 18:44:19.785131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.521 [2024-11-18 18:44:19.798448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.521 [2024-11-18 18:44:19.798876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.521 [2024-11-18 18:44:19.798928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.521 [2024-11-18 18:44:19.798955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.521 [2024-11-18 18:44:19.799238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.521 [2024-11-18 18:44:19.799524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.521 [2024-11-18 18:44:19.799556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.521 [2024-11-18 18:44:19.799580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.521 [2024-11-18 18:44:19.799626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.521 [2024-11-18 18:44:19.812896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.521 [2024-11-18 18:44:19.813332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.521 [2024-11-18 18:44:19.813378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.521 [2024-11-18 18:44:19.813406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.521 [2024-11-18 18:44:19.813699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.521 [2024-11-18 18:44:19.813985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.521 [2024-11-18 18:44:19.814017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.521 [2024-11-18 18:44:19.814041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.521 [2024-11-18 18:44:19.814063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.521 [2024-11-18 18:44:19.827281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.521 [2024-11-18 18:44:19.827738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.521 [2024-11-18 18:44:19.827780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.521 [2024-11-18 18:44:19.827808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.521 [2024-11-18 18:44:19.828090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.521 [2024-11-18 18:44:19.828375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.521 [2024-11-18 18:44:19.828407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.521 [2024-11-18 18:44:19.828430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.521 [2024-11-18 18:44:19.828454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.521 [2024-11-18 18:44:19.841699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.521 [2024-11-18 18:44:19.842124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.521 [2024-11-18 18:44:19.842167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.521 [2024-11-18 18:44:19.842193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.521 [2024-11-18 18:44:19.842475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.521 [2024-11-18 18:44:19.842775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.521 [2024-11-18 18:44:19.842808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.521 [2024-11-18 18:44:19.842831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.521 [2024-11-18 18:44:19.842853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.781 [2024-11-18 18:44:19.856288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.781 [2024-11-18 18:44:19.856770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.781 [2024-11-18 18:44:19.856813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.781 [2024-11-18 18:44:19.856840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.781 [2024-11-18 18:44:19.857128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.781 [2024-11-18 18:44:19.857413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.781 [2024-11-18 18:44:19.857446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.781 [2024-11-18 18:44:19.857470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.781 [2024-11-18 18:44:19.857492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.781 [2024-11-18 18:44:19.870688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.781 [2024-11-18 18:44:19.871135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.781 [2024-11-18 18:44:19.871175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.781 [2024-11-18 18:44:19.871202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.781 [2024-11-18 18:44:19.871483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.781 [2024-11-18 18:44:19.871781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.781 [2024-11-18 18:44:19.871813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.781 [2024-11-18 18:44:19.871838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.781 [2024-11-18 18:44:19.871860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.781 [2024-11-18 18:44:19.885070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.781 [2024-11-18 18:44:19.885537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.781 [2024-11-18 18:44:19.885579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.781 [2024-11-18 18:44:19.885617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.781 [2024-11-18 18:44:19.885902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.781 [2024-11-18 18:44:19.886186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.781 [2024-11-18 18:44:19.886217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.781 [2024-11-18 18:44:19.886241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.781 [2024-11-18 18:44:19.886264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.781 [2024-11-18 18:44:19.899454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.781 [2024-11-18 18:44:19.899950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.781 [2024-11-18 18:44:19.900009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.781 [2024-11-18 18:44:19.900036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.781 [2024-11-18 18:44:19.900319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.781 [2024-11-18 18:44:19.900621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.781 [2024-11-18 18:44:19.900654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.781 [2024-11-18 18:44:19.900678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.781 [2024-11-18 18:44:19.900701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.781 [2024-11-18 18:44:19.913908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.781 [2024-11-18 18:44:19.914357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.781 [2024-11-18 18:44:19.914399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.781 [2024-11-18 18:44:19.914426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.781 [2024-11-18 18:44:19.914721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.781 [2024-11-18 18:44:19.915006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.781 [2024-11-18 18:44:19.915038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.781 [2024-11-18 18:44:19.915062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.781 [2024-11-18 18:44:19.915084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.781 [2024-11-18 18:44:19.928297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.781 [2024-11-18 18:44:19.928769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.781 [2024-11-18 18:44:19.928811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.781 [2024-11-18 18:44:19.928838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.781 [2024-11-18 18:44:19.929120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.781 [2024-11-18 18:44:19.929405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.781 [2024-11-18 18:44:19.929436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.781 [2024-11-18 18:44:19.929461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.781 [2024-11-18 18:44:19.929483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.782 [2024-11-18 18:44:19.942696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.782 [2024-11-18 18:44:19.943145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.782 [2024-11-18 18:44:19.943186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.782 [2024-11-18 18:44:19.943213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.782 [2024-11-18 18:44:19.943495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.782 [2024-11-18 18:44:19.943792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.782 [2024-11-18 18:44:19.943825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.782 [2024-11-18 18:44:19.943855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.782 [2024-11-18 18:44:19.943878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.782 [2024-11-18 18:44:19.957081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.782 [2024-11-18 18:44:19.957577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.782 [2024-11-18 18:44:19.957627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.782 [2024-11-18 18:44:19.957656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.782 [2024-11-18 18:44:19.957939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.782 [2024-11-18 18:44:19.958225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.782 [2024-11-18 18:44:19.958256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.782 [2024-11-18 18:44:19.958279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.782 [2024-11-18 18:44:19.958301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.782 [2024-11-18 18:44:19.971506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.782 [2024-11-18 18:44:19.972014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.782 [2024-11-18 18:44:19.972074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.782 [2024-11-18 18:44:19.972101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.782 [2024-11-18 18:44:19.972383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.782 [2024-11-18 18:44:19.972680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.782 [2024-11-18 18:44:19.972713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.782 [2024-11-18 18:44:19.972736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.782 [2024-11-18 18:44:19.972758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.782 [2024-11-18 18:44:19.985935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.782 [2024-11-18 18:44:19.986438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.782 [2024-11-18 18:44:19.986497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.782 [2024-11-18 18:44:19.986524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.782 [2024-11-18 18:44:19.986819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.782 [2024-11-18 18:44:19.987104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.782 [2024-11-18 18:44:19.987137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.782 [2024-11-18 18:44:19.987160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.782 [2024-11-18 18:44:19.987182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.782 [2024-11-18 18:44:20.000384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.782 [2024-11-18 18:44:20.000840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.782 [2024-11-18 18:44:20.000882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.782 [2024-11-18 18:44:20.000909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.782 [2024-11-18 18:44:20.001191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.782 [2024-11-18 18:44:20.001488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.782 [2024-11-18 18:44:20.001520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.782 [2024-11-18 18:44:20.001544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.782 [2024-11-18 18:44:20.001566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.782 [2024-11-18 18:44:20.014912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.782 [2024-11-18 18:44:20.015376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.782 [2024-11-18 18:44:20.015417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.782 [2024-11-18 18:44:20.015444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.782 [2024-11-18 18:44:20.015741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.782 [2024-11-18 18:44:20.016028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.782 [2024-11-18 18:44:20.016061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.782 [2024-11-18 18:44:20.016096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.782 [2024-11-18 18:44:20.016119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.782 [2024-11-18 18:44:20.029460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.782 [2024-11-18 18:44:20.030068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.782 [2024-11-18 18:44:20.030112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.782 [2024-11-18 18:44:20.030155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.782 [2024-11-18 18:44:20.030454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.782 [2024-11-18 18:44:20.030772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.782 [2024-11-18 18:44:20.030805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.782 [2024-11-18 18:44:20.030829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.782 [2024-11-18 18:44:20.030851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.782 [2024-11-18 18:44:20.044186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.782 [2024-11-18 18:44:20.044682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.782 [2024-11-18 18:44:20.044735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.782 [2024-11-18 18:44:20.044764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.782 [2024-11-18 18:44:20.045055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.782 [2024-11-18 18:44:20.045356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.782 [2024-11-18 18:44:20.045389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.782 [2024-11-18 18:44:20.045413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.782 [2024-11-18 18:44:20.045435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.783 [2024-11-18 18:44:20.058914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.783 [2024-11-18 18:44:20.059364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.783 [2024-11-18 18:44:20.059408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.783 [2024-11-18 18:44:20.059436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.783 [2024-11-18 18:44:20.059737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.783 [2024-11-18 18:44:20.060037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.783 [2024-11-18 18:44:20.060070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.783 [2024-11-18 18:44:20.060094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.783 [2024-11-18 18:44:20.060118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.783 [2024-11-18 18:44:20.073597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.783 [2024-11-18 18:44:20.074044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.783 [2024-11-18 18:44:20.074086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.783 [2024-11-18 18:44:20.074114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.783 [2024-11-18 18:44:20.074405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.783 [2024-11-18 18:44:20.074715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.783 [2024-11-18 18:44:20.074750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.783 [2024-11-18 18:44:20.074775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.783 [2024-11-18 18:44:20.074799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.783 [2024-11-18 18:44:20.088102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.783 [2024-11-18 18:44:20.088586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.783 [2024-11-18 18:44:20.088641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.783 [2024-11-18 18:44:20.088670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.783 [2024-11-18 18:44:20.088967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.783 [2024-11-18 18:44:20.089259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.783 [2024-11-18 18:44:20.089292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.783 [2024-11-18 18:44:20.089316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.783 [2024-11-18 18:44:20.089340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.783 [2024-11-18 18:44:20.102547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.783 [2024-11-18 18:44:20.103063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.783 [2024-11-18 18:44:20.103106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.783 [2024-11-18 18:44:20.103133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.783 [2024-11-18 18:44:20.103419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.783 [2024-11-18 18:44:20.103722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.783 [2024-11-18 18:44:20.103755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.783 [2024-11-18 18:44:20.103780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.783 [2024-11-18 18:44:20.103803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.042 [2024-11-18 18:44:20.117142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.042 [2024-11-18 18:44:20.117665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.042 [2024-11-18 18:44:20.117708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.042 [2024-11-18 18:44:20.117735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.042 [2024-11-18 18:44:20.118021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.042 [2024-11-18 18:44:20.118310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.042 [2024-11-18 18:44:20.118343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.042 [2024-11-18 18:44:20.118367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.042 [2024-11-18 18:44:20.118390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.042 [2024-11-18 18:44:20.131821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.042 [2024-11-18 18:44:20.132282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.042 [2024-11-18 18:44:20.132324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.042 [2024-11-18 18:44:20.132351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.042 [2024-11-18 18:44:20.132649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.042 [2024-11-18 18:44:20.132937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.042 [2024-11-18 18:44:20.132981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.042 [2024-11-18 18:44:20.133006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.042 [2024-11-18 18:44:20.133030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.042 [2024-11-18 18:44:20.146382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.042 [2024-11-18 18:44:20.146832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.042 [2024-11-18 18:44:20.146874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.042 [2024-11-18 18:44:20.146901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.042 [2024-11-18 18:44:20.147185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.042 [2024-11-18 18:44:20.147472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.042 [2024-11-18 18:44:20.147505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.042 [2024-11-18 18:44:20.147529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.042 [2024-11-18 18:44:20.147553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.042 [2024-11-18 18:44:20.160922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.042 [2024-11-18 18:44:20.161391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.042 [2024-11-18 18:44:20.161433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.042 [2024-11-18 18:44:20.161461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.042 [2024-11-18 18:44:20.161763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.042 [2024-11-18 18:44:20.162052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.042 [2024-11-18 18:44:20.162086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.042 [2024-11-18 18:44:20.162109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.042 [2024-11-18 18:44:20.162132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.043 [2024-11-18 18:44:20.175525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.043 [2024-11-18 18:44:20.175986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.043 [2024-11-18 18:44:20.176029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.043 [2024-11-18 18:44:20.176056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.043 [2024-11-18 18:44:20.176340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.043 [2024-11-18 18:44:20.176643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.043 [2024-11-18 18:44:20.176677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.043 [2024-11-18 18:44:20.176701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.043 [2024-11-18 18:44:20.176730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.043 [2024-11-18 18:44:20.190099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.043 [2024-11-18 18:44:20.190572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.043 [2024-11-18 18:44:20.190625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.043 [2024-11-18 18:44:20.190654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.043 [2024-11-18 18:44:20.190942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.043 [2024-11-18 18:44:20.191229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.043 [2024-11-18 18:44:20.191262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.043 [2024-11-18 18:44:20.191287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.043 [2024-11-18 18:44:20.191310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.043 [2024-11-18 18:44:20.204796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.043 [2024-11-18 18:44:20.205264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.043 [2024-11-18 18:44:20.205306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.043 [2024-11-18 18:44:20.205333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.043 [2024-11-18 18:44:20.205635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.043 [2024-11-18 18:44:20.205938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.043 [2024-11-18 18:44:20.205971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.043 [2024-11-18 18:44:20.205996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.043 [2024-11-18 18:44:20.206019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.043 [2024-11-18 18:44:20.219505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.043 [2024-11-18 18:44:20.219979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.043 [2024-11-18 18:44:20.220021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.043 [2024-11-18 18:44:20.220049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.043 [2024-11-18 18:44:20.220335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.043 [2024-11-18 18:44:20.220639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.043 [2024-11-18 18:44:20.220673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.043 [2024-11-18 18:44:20.220696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.043 [2024-11-18 18:44:20.220720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.043 [2024-11-18 18:44:20.233988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.043 [2024-11-18 18:44:20.234568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.043 [2024-11-18 18:44:20.234638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.043 [2024-11-18 18:44:20.234666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.043 [2024-11-18 18:44:20.234955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.043 [2024-11-18 18:44:20.235261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.043 [2024-11-18 18:44:20.235294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.043 [2024-11-18 18:44:20.235318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.043 [2024-11-18 18:44:20.235341] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.043 [2024-11-18 18:44:20.248453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.043 [2024-11-18 18:44:20.248996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.043 [2024-11-18 18:44:20.249058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.043 [2024-11-18 18:44:20.249085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.043 [2024-11-18 18:44:20.249369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.043 [2024-11-18 18:44:20.249674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.043 [2024-11-18 18:44:20.249707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.043 [2024-11-18 18:44:20.249731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.043 [2024-11-18 18:44:20.249754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.043 [2024-11-18 18:44:20.263079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.043 [2024-11-18 18:44:20.263562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.043 [2024-11-18 18:44:20.263605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.043 [2024-11-18 18:44:20.263644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.043 [2024-11-18 18:44:20.263930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.043 [2024-11-18 18:44:20.264220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.043 [2024-11-18 18:44:20.264254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.043 [2024-11-18 18:44:20.264279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.043 [2024-11-18 18:44:20.264302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.043 [2024-11-18 18:44:20.277677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.043 [2024-11-18 18:44:20.278202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.043 [2024-11-18 18:44:20.278244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.043 [2024-11-18 18:44:20.278277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.043 [2024-11-18 18:44:20.278563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.043 [2024-11-18 18:44:20.278870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.043 [2024-11-18 18:44:20.278902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.043 [2024-11-18 18:44:20.278926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.043 [2024-11-18 18:44:20.278949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.043 [2024-11-18 18:44:20.292272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.043 [2024-11-18 18:44:20.292698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.043 [2024-11-18 18:44:20.292742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.043 [2024-11-18 18:44:20.292768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.043 [2024-11-18 18:44:20.293060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.044 [2024-11-18 18:44:20.293344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.044 [2024-11-18 18:44:20.293377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.044 [2024-11-18 18:44:20.293402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.044 [2024-11-18 18:44:20.293425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.044 [2024-11-18 18:44:20.306717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.044 [2024-11-18 18:44:20.307257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.044 [2024-11-18 18:44:20.307299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.044 [2024-11-18 18:44:20.307327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.044 [2024-11-18 18:44:20.307618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.044 [2024-11-18 18:44:20.307912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.044 [2024-11-18 18:44:20.307945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.044 [2024-11-18 18:44:20.307970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.044 [2024-11-18 18:44:20.307994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.044 [2024-11-18 18:44:20.321233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.044 [2024-11-18 18:44:20.321763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.044 [2024-11-18 18:44:20.321808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.044 [2024-11-18 18:44:20.321835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.044 [2024-11-18 18:44:20.322120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.044 [2024-11-18 18:44:20.322414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.044 [2024-11-18 18:44:20.322446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.044 [2024-11-18 18:44:20.322470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.044 [2024-11-18 18:44:20.322492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.044 [2024-11-18 18:44:20.335835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.044 [2024-11-18 18:44:20.336265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.044 [2024-11-18 18:44:20.336308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.044 [2024-11-18 18:44:20.336335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.044 [2024-11-18 18:44:20.336630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.044 [2024-11-18 18:44:20.336917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.044 [2024-11-18 18:44:20.336950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.044 [2024-11-18 18:44:20.336974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.044 [2024-11-18 18:44:20.336996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.044 2775.40 IOPS, 10.84 MiB/s [2024-11-18T17:44:20.381Z] [2024-11-18 18:44:20.350319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.044 [2024-11-18 18:44:20.350801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.044 [2024-11-18 18:44:20.350844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.044 [2024-11-18 18:44:20.350872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.044 [2024-11-18 18:44:20.351155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.044 [2024-11-18 18:44:20.351441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.044 [2024-11-18 18:44:20.351474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.044 [2024-11-18 18:44:20.351499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.044 [2024-11-18 18:44:20.351522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.044 [2024-11-18 18:44:20.364785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.044 [2024-11-18 18:44:20.365216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.044 [2024-11-18 18:44:20.365259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.044 [2024-11-18 18:44:20.365286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.044 [2024-11-18 18:44:20.365571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.044 [2024-11-18 18:44:20.365879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.044 [2024-11-18 18:44:20.365920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.044 [2024-11-18 18:44:20.365946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.044 [2024-11-18 18:44:20.365970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.304 [2024-11-18 18:44:20.379190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.304 [2024-11-18 18:44:20.379622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.304 [2024-11-18 18:44:20.379665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.304 [2024-11-18 18:44:20.379693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.304 [2024-11-18 18:44:20.379975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.304 [2024-11-18 18:44:20.380262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.304 [2024-11-18 18:44:20.380295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.304 [2024-11-18 18:44:20.380319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.304 [2024-11-18 18:44:20.380342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.304 [2024-11-18 18:44:20.393582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.304 [2024-11-18 18:44:20.394061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.304 [2024-11-18 18:44:20.394105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.304 [2024-11-18 18:44:20.394132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.304 [2024-11-18 18:44:20.394413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.304 [2024-11-18 18:44:20.394715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.304 [2024-11-18 18:44:20.394749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.304 [2024-11-18 18:44:20.394773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.304 [2024-11-18 18:44:20.394796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.304 [2024-11-18 18:44:20.407994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.304 [2024-11-18 18:44:20.408441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.304 [2024-11-18 18:44:20.408482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.304 [2024-11-18 18:44:20.408509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.304 [2024-11-18 18:44:20.408803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.304 [2024-11-18 18:44:20.409087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.304 [2024-11-18 18:44:20.409120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.304 [2024-11-18 18:44:20.409144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.304 [2024-11-18 18:44:20.409174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.304 [2024-11-18 18:44:20.422410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.304 [2024-11-18 18:44:20.422847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.304 [2024-11-18 18:44:20.422889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.304 [2024-11-18 18:44:20.422916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.304 [2024-11-18 18:44:20.423198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.304 [2024-11-18 18:44:20.423483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.304 [2024-11-18 18:44:20.423516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.304 [2024-11-18 18:44:20.423541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.304 [2024-11-18 18:44:20.423564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.304 [2024-11-18 18:44:20.436819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.304 [2024-11-18 18:44:20.437290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.304 [2024-11-18 18:44:20.437333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.304 [2024-11-18 18:44:20.437361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.304 [2024-11-18 18:44:20.437658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.304 [2024-11-18 18:44:20.437943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.304 [2024-11-18 18:44:20.437976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.304 [2024-11-18 18:44:20.438016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.304 [2024-11-18 18:44:20.438040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.304 [2024-11-18 18:44:20.451256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.304 [2024-11-18 18:44:20.451707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.304 [2024-11-18 18:44:20.451749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.304 [2024-11-18 18:44:20.451776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.304 [2024-11-18 18:44:20.452060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.304 [2024-11-18 18:44:20.452345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.304 [2024-11-18 18:44:20.452378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.304 [2024-11-18 18:44:20.452412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.304 [2024-11-18 18:44:20.452436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.304 [2024-11-18 18:44:20.465623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.304 [2024-11-18 18:44:20.466097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.304 [2024-11-18 18:44:20.466140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.304 [2024-11-18 18:44:20.466167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.304 [2024-11-18 18:44:20.466451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.304 [2024-11-18 18:44:20.466749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.304 [2024-11-18 18:44:20.466782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.304 [2024-11-18 18:44:20.466806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.304 [2024-11-18 18:44:20.466828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.304 [2024-11-18 18:44:20.480041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.304 [2024-11-18 18:44:20.480472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.304 [2024-11-18 18:44:20.480514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.304 [2024-11-18 18:44:20.480542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.304 [2024-11-18 18:44:20.480835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.304 [2024-11-18 18:44:20.481121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.304 [2024-11-18 18:44:20.481154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.304 [2024-11-18 18:44:20.481178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.304 [2024-11-18 18:44:20.481201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.304 [2024-11-18 18:44:20.494627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.304 [2024-11-18 18:44:20.495087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.304 [2024-11-18 18:44:20.495130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.304 [2024-11-18 18:44:20.495157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.304 [2024-11-18 18:44:20.495440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.305 [2024-11-18 18:44:20.495751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.305 [2024-11-18 18:44:20.495785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.305 [2024-11-18 18:44:20.495810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.305 [2024-11-18 18:44:20.495833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.305 [2024-11-18 18:44:20.509012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.305 [2024-11-18 18:44:20.509476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.305 [2024-11-18 18:44:20.509519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.305 [2024-11-18 18:44:20.509552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.305 [2024-11-18 18:44:20.509851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.305 [2024-11-18 18:44:20.510156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.305 [2024-11-18 18:44:20.510189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.305 [2024-11-18 18:44:20.510213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.305 [2024-11-18 18:44:20.510246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3135642 Killed "${NVMF_APP[@]}" "$@" 00:37:22.305 [2024-11-18 18:44:20.523429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.305 18:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:37:22.305 [2024-11-18 18:44:20.523895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 18:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:22.305 [2024-11-18 18:44:20.523937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.305 [2024-11-18 18:44:20.523964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.305 18:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:22.305 18:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:22.305 [2024-11-18 18:44:20.524247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.305 18:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:22.305 [2024-11-18 18:44:20.524532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.305 [2024-11-18 18:44:20.524563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.305 [2024-11-18 18:44:20.524587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:37:22.305 [2024-11-18 18:44:20.524621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.305 18:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3136850
00:37:22.305 18:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:37:22.305 18:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3136850
00:37:22.305 18:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3136850 ']'
00:37:22.305 18:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:22.305 18:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:22.305 18:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:22.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:22.305 18:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:22.305 18:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:22.305 [2024-11-18 18:44:20.537879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.305 [2024-11-18 18:44:20.538306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.305 [2024-11-18 18:44:20.538355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.305 [2024-11-18 18:44:20.538383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.305 [2024-11-18 18:44:20.538678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.305 [2024-11-18 18:44:20.538964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.305 [2024-11-18 18:44:20.538996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.305 [2024-11-18 18:44:20.539020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.305 [2024-11-18 18:44:20.539042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.305 [2024-11-18 18:44:20.552249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.305 [2024-11-18 18:44:20.552694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.305 [2024-11-18 18:44:20.552737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.305 [2024-11-18 18:44:20.552764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.305 [2024-11-18 18:44:20.553048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.305 [2024-11-18 18:44:20.553335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.305 [2024-11-18 18:44:20.553368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.305 [2024-11-18 18:44:20.553392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.305 [2024-11-18 18:44:20.553414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.305 [2024-11-18 18:44:20.566943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.305 [2024-11-18 18:44:20.567450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.305 [2024-11-18 18:44:20.567506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.305 [2024-11-18 18:44:20.567534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.305 [2024-11-18 18:44:20.567839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.305 [2024-11-18 18:44:20.568137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.305 [2024-11-18 18:44:20.568170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.305 [2024-11-18 18:44:20.568195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.305 [2024-11-18 18:44:20.568219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.305 [2024-11-18 18:44:20.581779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.305 [2024-11-18 18:44:20.582344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.305 [2024-11-18 18:44:20.582401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.305 [2024-11-18 18:44:20.582429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.305 [2024-11-18 18:44:20.582745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.305 [2024-11-18 18:44:20.583041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.305 [2024-11-18 18:44:20.583075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.305 [2024-11-18 18:44:20.583100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.305 [2024-11-18 18:44:20.583125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.305 [2024-11-18 18:44:20.596396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.305 [2024-11-18 18:44:20.596857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.305 [2024-11-18 18:44:20.596910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.305 [2024-11-18 18:44:20.596937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.305 [2024-11-18 18:44:20.597251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.305 [2024-11-18 18:44:20.597557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.306 [2024-11-18 18:44:20.597598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.306 [2024-11-18 18:44:20.597634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.306 [2024-11-18 18:44:20.597667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.306 [2024-11-18 18:44:20.611133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.306 [2024-11-18 18:44:20.611600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.306 [2024-11-18 18:44:20.611663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.306 [2024-11-18 18:44:20.611700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.306 [2024-11-18 18:44:20.611991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.306 [2024-11-18 18:44:20.612283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.306 [2024-11-18 18:44:20.612314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.306 [2024-11-18 18:44:20.612339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.306 [2024-11-18 18:44:20.612361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.306 [2024-11-18 18:44:20.625708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.306 [2024-11-18 18:44:20.626213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.306 [2024-11-18 18:44:20.626266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.306 [2024-11-18 18:44:20.626293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.306 [2024-11-18 18:44:20.626584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.306 [2024-11-18 18:44:20.626908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.306 [2024-11-18 18:44:20.626960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.306 [2024-11-18 18:44:20.626985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.306 [2024-11-18 18:44:20.627008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.306 [2024-11-18 18:44:20.630257] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:37:22.306 [2024-11-18 18:44:20.630393] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:22.565 [2024-11-18 18:44:20.640331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.565 [2024-11-18 18:44:20.640792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.565 [2024-11-18 18:44:20.640835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.565 [2024-11-18 18:44:20.640862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.565 [2024-11-18 18:44:20.641152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.565 [2024-11-18 18:44:20.641453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.565 [2024-11-18 18:44:20.641485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.565 [2024-11-18 18:44:20.641509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.565 [2024-11-18 18:44:20.641532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.565 [2024-11-18 18:44:20.654893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.565 [2024-11-18 18:44:20.655351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.565 [2024-11-18 18:44:20.655400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.565 [2024-11-18 18:44:20.655426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.565 [2024-11-18 18:44:20.655730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.565 [2024-11-18 18:44:20.656023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.565 [2024-11-18 18:44:20.656054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.565 [2024-11-18 18:44:20.656078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.565 [2024-11-18 18:44:20.656101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.565 [2024-11-18 18:44:20.669434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.565 [2024-11-18 18:44:20.669922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.565 [2024-11-18 18:44:20.669976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.565 [2024-11-18 18:44:20.670005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.565 [2024-11-18 18:44:20.670303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.565 [2024-11-18 18:44:20.670628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.565 [2024-11-18 18:44:20.670661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.565 [2024-11-18 18:44:20.670685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.565 [2024-11-18 18:44:20.670708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.565 [2024-11-18 18:44:20.683500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.565 [2024-11-18 18:44:20.683914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.565 [2024-11-18 18:44:20.683951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.565 [2024-11-18 18:44:20.683988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.565 [2024-11-18 18:44:20.684238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.565 [2024-11-18 18:44:20.684490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.565 [2024-11-18 18:44:20.684517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.565 [2024-11-18 18:44:20.684537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.565 [2024-11-18 18:44:20.684555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.565 [2024-11-18 18:44:20.697335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.566 [2024-11-18 18:44:20.697802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.566 [2024-11-18 18:44:20.697850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.566 [2024-11-18 18:44:20.697873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.566 [2024-11-18 18:44:20.698158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.566 [2024-11-18 18:44:20.698389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.566 [2024-11-18 18:44:20.698414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.566 [2024-11-18 18:44:20.698434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.566 [2024-11-18 18:44:20.698452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.566 [2024-11-18 18:44:20.711283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.566 [2024-11-18 18:44:20.711672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.566 [2024-11-18 18:44:20.711720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.566 [2024-11-18 18:44:20.711743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.566 [2024-11-18 18:44:20.712033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.566 [2024-11-18 18:44:20.712266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.566 [2024-11-18 18:44:20.712292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.566 [2024-11-18 18:44:20.712317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.566 [2024-11-18 18:44:20.712336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.566 [2024-11-18 18:44:20.725196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.566 [2024-11-18 18:44:20.725636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.566 [2024-11-18 18:44:20.725694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.566 [2024-11-18 18:44:20.725736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.566 [2024-11-18 18:44:20.726033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.566 [2024-11-18 18:44:20.726267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.566 [2024-11-18 18:44:20.726292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.566 [2024-11-18 18:44:20.726312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.566 [2024-11-18 18:44:20.726329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.566 [2024-11-18 18:44:20.739819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.566 [2024-11-18 18:44:20.740336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.566 [2024-11-18 18:44:20.740386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.566 [2024-11-18 18:44:20.740414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.566 [2024-11-18 18:44:20.740721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.566 [2024-11-18 18:44:20.741006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.566 [2024-11-18 18:44:20.741038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.566 [2024-11-18 18:44:20.741062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.566 [2024-11-18 18:44:20.741084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.566 [2024-11-18 18:44:20.754337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.566 [2024-11-18 18:44:20.754839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.566 [2024-11-18 18:44:20.754884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.566 [2024-11-18 18:44:20.754908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.566 [2024-11-18 18:44:20.755211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.566 [2024-11-18 18:44:20.755502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.566 [2024-11-18 18:44:20.755534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.566 [2024-11-18 18:44:20.755557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.566 [2024-11-18 18:44:20.755579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.566 [2024-11-18 18:44:20.768848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.566 [2024-11-18 18:44:20.769307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.566 [2024-11-18 18:44:20.769357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.566 [2024-11-18 18:44:20.769384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.566 [2024-11-18 18:44:20.769690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.566 [2024-11-18 18:44:20.769957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.566 [2024-11-18 18:44:20.769989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.566 [2024-11-18 18:44:20.770014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.566 [2024-11-18 18:44:20.770037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.566 [2024-11-18 18:44:20.783240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.566 [2024-11-18 18:44:20.783776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.566 [2024-11-18 18:44:20.783824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.566 [2024-11-18 18:44:20.783848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.566 [2024-11-18 18:44:20.784147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.566 [2024-11-18 18:44:20.784438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.566 [2024-11-18 18:44:20.784470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.566 [2024-11-18 18:44:20.784494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.566 [2024-11-18 18:44:20.784516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.566 [2024-11-18 18:44:20.797586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:37:22.566 [2024-11-18 18:44:20.797934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.566 [2024-11-18 18:44:20.798378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.566 [2024-11-18 18:44:20.798420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.566 [2024-11-18 18:44:20.798457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.566 [2024-11-18 18:44:20.798757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.566 [2024-11-18 18:44:20.799016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.566 [2024-11-18 18:44:20.799046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.566 [2024-11-18 18:44:20.799069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.566 [2024-11-18 18:44:20.799089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.567 [2024-11-18 18:44:20.812431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.567 [2024-11-18 18:44:20.812983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.567 [2024-11-18 18:44:20.813029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.567 [2024-11-18 18:44:20.813069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.567 [2024-11-18 18:44:20.813359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.567 [2024-11-18 18:44:20.813680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.567 [2024-11-18 18:44:20.813710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.567 [2024-11-18 18:44:20.813733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.567 [2024-11-18 18:44:20.813756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.567 [2024-11-18 18:44:20.827153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.567 [2024-11-18 18:44:20.827864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.567 [2024-11-18 18:44:20.827925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.567 [2024-11-18 18:44:20.827958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.567 [2024-11-18 18:44:20.828256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.567 [2024-11-18 18:44:20.828563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.567 [2024-11-18 18:44:20.828598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.567 [2024-11-18 18:44:20.828638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.567 [2024-11-18 18:44:20.828689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.567 [2024-11-18 18:44:20.841826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.567 [2024-11-18 18:44:20.842299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.567 [2024-11-18 18:44:20.842349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.567 [2024-11-18 18:44:20.842376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.567 [2024-11-18 18:44:20.842698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.567 [2024-11-18 18:44:20.842968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.567 [2024-11-18 18:44:20.843001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.567 [2024-11-18 18:44:20.843025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.567 [2024-11-18 18:44:20.843048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.567 [2024-11-18 18:44:20.856375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.567 [2024-11-18 18:44:20.856850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.567 [2024-11-18 18:44:20.856901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.567 [2024-11-18 18:44:20.856946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.567 [2024-11-18 18:44:20.857246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.567 [2024-11-18 18:44:20.857546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.567 [2024-11-18 18:44:20.857578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.567 [2024-11-18 18:44:20.857602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.567 [2024-11-18 18:44:20.857640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.567 [2024-11-18 18:44:20.870777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.567 [2024-11-18 18:44:20.871242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.567 [2024-11-18 18:44:20.871285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.567 [2024-11-18 18:44:20.871313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.567 [2024-11-18 18:44:20.871600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.567 [2024-11-18 18:44:20.871878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.567 [2024-11-18 18:44:20.871921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.567 [2024-11-18 18:44:20.871941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.567 [2024-11-18 18:44:20.871978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.567 [2024-11-18 18:44:20.885159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.567 [2024-11-18 18:44:20.885649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.567 [2024-11-18 18:44:20.885688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.567 [2024-11-18 18:44:20.885712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.567 [2024-11-18 18:44:20.886006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.567 [2024-11-18 18:44:20.886294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.567 [2024-11-18 18:44:20.886327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.567 [2024-11-18 18:44:20.886351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.567 [2024-11-18 18:44:20.886384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.567 [2024-11-18 18:44:20.899802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.567 [2024-11-18 18:44:20.900264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.567 [2024-11-18 18:44:20.900303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.567 [2024-11-18 18:44:20.900327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.827 [2024-11-18 18:44:20.900651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.827 [2024-11-18 18:44:20.900960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.827 [2024-11-18 18:44:20.901007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.827 [2024-11-18 18:44:20.901029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.827 [2024-11-18 18:44:20.901068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.827 [2024-11-18 18:44:20.914309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.827 [2024-11-18 18:44:20.914830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.827 [2024-11-18 18:44:20.914869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.827 [2024-11-18 18:44:20.914903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.827 [2024-11-18 18:44:20.915207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.827 [2024-11-18 18:44:20.915513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.827 [2024-11-18 18:44:20.915545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.827 [2024-11-18 18:44:20.915569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.827 [2024-11-18 18:44:20.915592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.827 [2024-11-18 18:44:20.928897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.827 [2024-11-18 18:44:20.929500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.827 [2024-11-18 18:44:20.929539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.827 [2024-11-18 18:44:20.929563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.827 [2024-11-18 18:44:20.929833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.827 [2024-11-18 18:44:20.930139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.827 [2024-11-18 18:44:20.930178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.827 [2024-11-18 18:44:20.930200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.827 [2024-11-18 18:44:20.930220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.827 [2024-11-18 18:44:20.940300] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:22.827 [2024-11-18 18:44:20.940352] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:22.827 [2024-11-18 18:44:20.940378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:22.827 [2024-11-18 18:44:20.940403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:37:22.827 [2024-11-18 18:44:20.940423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:22.827 [2024-11-18 18:44:20.943201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:22.827 [2024-11-18 18:44:20.943292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:22.827 [2024-11-18 18:44:20.943298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:22.827 [2024-11-18 18:44:20.943523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.827 [2024-11-18 18:44:20.944015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.827 [2024-11-18 18:44:20.944054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.827 [2024-11-18 18:44:20.944078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.827 [2024-11-18 18:44:20.944370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.827 [2024-11-18 18:44:20.944645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.827 [2024-11-18 18:44:20.944686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.827 [2024-11-18 18:44:20.944708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.827 [2024-11-18 18:44:20.944729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.827 [2024-11-18 18:44:20.957883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.827 [2024-11-18 18:44:20.958659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.827 [2024-11-18 18:44:20.958711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.827 [2024-11-18 18:44:20.958753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.827 [2024-11-18 18:44:20.959061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.827 [2024-11-18 18:44:20.959326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.827 [2024-11-18 18:44:20.959356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.827 [2024-11-18 18:44:20.959383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.827 [2024-11-18 18:44:20.959410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.827 [2024-11-18 18:44:20.972200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.827 [2024-11-18 18:44:20.972696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.828 [2024-11-18 18:44:20.972746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.828 [2024-11-18 18:44:20.972772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.828 [2024-11-18 18:44:20.973068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.828 [2024-11-18 18:44:20.973325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.828 [2024-11-18 18:44:20.973355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.828 [2024-11-18 18:44:20.973376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.828 [2024-11-18 18:44:20.973396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.828 [2024-11-18 18:44:20.986438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.828 [2024-11-18 18:44:20.986891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.828 [2024-11-18 18:44:20.986930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.828 [2024-11-18 18:44:20.986961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.828 [2024-11-18 18:44:20.987261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.828 [2024-11-18 18:44:20.987515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.828 [2024-11-18 18:44:20.987542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.828 [2024-11-18 18:44:20.987563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.828 [2024-11-18 18:44:20.987582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.828 [2024-11-18 18:44:21.000527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.828 [2024-11-18 18:44:21.000928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.828 [2024-11-18 18:44:21.000968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.828 [2024-11-18 18:44:21.000992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.828 [2024-11-18 18:44:21.001264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.828 [2024-11-18 18:44:21.001509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.828 [2024-11-18 18:44:21.001537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.828 [2024-11-18 18:44:21.001557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.828 [2024-11-18 18:44:21.001576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.828 [2024-11-18 18:44:21.014620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.828 [2024-11-18 18:44:21.015069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.828 [2024-11-18 18:44:21.015108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.828 [2024-11-18 18:44:21.015132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.828 [2024-11-18 18:44:21.015407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.828 [2024-11-18 18:44:21.015686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.828 [2024-11-18 18:44:21.015716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.828 [2024-11-18 18:44:21.015737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.828 [2024-11-18 18:44:21.015756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.828 [2024-11-18 18:44:21.028947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.828 [2024-11-18 18:44:21.029688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.828 [2024-11-18 18:44:21.029746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.828 [2024-11-18 18:44:21.029779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.828 [2024-11-18 18:44:21.030079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.828 [2024-11-18 18:44:21.030340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.828 [2024-11-18 18:44:21.030371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.828 [2024-11-18 18:44:21.030398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.828 [2024-11-18 18:44:21.030424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.828 [2024-11-18 18:44:21.043080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.828 [2024-11-18 18:44:21.043835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.828 [2024-11-18 18:44:21.043892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.828 [2024-11-18 18:44:21.043924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.828 [2024-11-18 18:44:21.044226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.828 [2024-11-18 18:44:21.044486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.828 [2024-11-18 18:44:21.044517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.828 [2024-11-18 18:44:21.044545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.828 [2024-11-18 18:44:21.044571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.828 [2024-11-18 18:44:21.057272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.828 [2024-11-18 18:44:21.057850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.828 [2024-11-18 18:44:21.057897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.828 [2024-11-18 18:44:21.057927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.828 [2024-11-18 18:44:21.058237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.828 [2024-11-18 18:44:21.058489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.828 [2024-11-18 18:44:21.058517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.828 [2024-11-18 18:44:21.058540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.828 [2024-11-18 18:44:21.058564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.828 [2024-11-18 18:44:21.071488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.828 [2024-11-18 18:44:21.071897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.828 [2024-11-18 18:44:21.071935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.828 [2024-11-18 18:44:21.071960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.828 [2024-11-18 18:44:21.072232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.828 [2024-11-18 18:44:21.072481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.828 [2024-11-18 18:44:21.072514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.828 [2024-11-18 18:44:21.072535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.828 [2024-11-18 18:44:21.072554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.828 [2024-11-18 18:44:21.085637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.828 [2024-11-18 18:44:21.086059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.828 [2024-11-18 18:44:21.086098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.828 [2024-11-18 18:44:21.086123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.828 [2024-11-18 18:44:21.086415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.828 [2024-11-18 18:44:21.086713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.828 [2024-11-18 18:44:21.086743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.828 [2024-11-18 18:44:21.086764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.828 [2024-11-18 18:44:21.086784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.828 [2024-11-18 18:44:21.099657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.828 [2024-11-18 18:44:21.100128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.828 [2024-11-18 18:44:21.100166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.828 [2024-11-18 18:44:21.100190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.828 [2024-11-18 18:44:21.100475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.828 [2024-11-18 18:44:21.100770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.828 [2024-11-18 18:44:21.100801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.828 [2024-11-18 18:44:21.100822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.828 [2024-11-18 18:44:21.100841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.829 [2024-11-18 18:44:21.113633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.829 [2024-11-18 18:44:21.114110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.829 [2024-11-18 18:44:21.114148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.829 [2024-11-18 18:44:21.114172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.829 [2024-11-18 18:44:21.114458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.829 [2024-11-18 18:44:21.114751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.829 [2024-11-18 18:44:21.114781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.829 [2024-11-18 18:44:21.114804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.829 [2024-11-18 18:44:21.114833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.829 [2024-11-18 18:44:21.127489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.829 [2024-11-18 18:44:21.127914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.829 [2024-11-18 18:44:21.127953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.829 [2024-11-18 18:44:21.127977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.829 [2024-11-18 18:44:21.128260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.829 [2024-11-18 18:44:21.128504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.829 [2024-11-18 18:44:21.128531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.829 [2024-11-18 18:44:21.128551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.829 [2024-11-18 18:44:21.128569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.829 [2024-11-18 18:44:21.141542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.829 [2024-11-18 18:44:21.142019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.829 [2024-11-18 18:44:21.142057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.829 [2024-11-18 18:44:21.142082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.829 [2024-11-18 18:44:21.142367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.829 [2024-11-18 18:44:21.142640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.829 [2024-11-18 18:44:21.142670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.829 [2024-11-18 18:44:21.142692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.829 [2024-11-18 18:44:21.142712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.829 [2024-11-18 18:44:21.155565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.829 [2024-11-18 18:44:21.156059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.829 [2024-11-18 18:44:21.156097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.829 [2024-11-18 18:44:21.156121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.829 [2024-11-18 18:44:21.156408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.829 [2024-11-18 18:44:21.156685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.829 [2024-11-18 18:44:21.156715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.829 [2024-11-18 18:44:21.156737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.829 [2024-11-18 18:44:21.156757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.088 [2024-11-18 18:44:21.169782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.088 [2024-11-18 18:44:21.170421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.088 [2024-11-18 18:44:21.170468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.088 [2024-11-18 18:44:21.170497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.088 [2024-11-18 18:44:21.170792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.088 [2024-11-18 18:44:21.171070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.088 [2024-11-18 18:44:21.171099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.088 [2024-11-18 18:44:21.171123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.089 [2024-11-18 18:44:21.171147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.089 [2024-11-18 18:44:21.184028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.089 [2024-11-18 18:44:21.184684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.089 [2024-11-18 18:44:21.184735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.089 [2024-11-18 18:44:21.184767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.089 [2024-11-18 18:44:21.185074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.089 [2024-11-18 18:44:21.185348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.089 [2024-11-18 18:44:21.185379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.089 [2024-11-18 18:44:21.185406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.089 [2024-11-18 18:44:21.185432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.089 [2024-11-18 18:44:21.198361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.089 [2024-11-18 18:44:21.198776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.089 [2024-11-18 18:44:21.198816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.089 [2024-11-18 18:44:21.198840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.089 [2024-11-18 18:44:21.199125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.089 [2024-11-18 18:44:21.199375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.089 [2024-11-18 18:44:21.199402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.089 [2024-11-18 18:44:21.199422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.089 [2024-11-18 18:44:21.199441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.089 [2024-11-18 18:44:21.212542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.089 [2024-11-18 18:44:21.212955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.089 [2024-11-18 18:44:21.212993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.089 [2024-11-18 18:44:21.213024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.089 [2024-11-18 18:44:21.213316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.089 [2024-11-18 18:44:21.213565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.089 [2024-11-18 18:44:21.213617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.089 [2024-11-18 18:44:21.213641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.089 [2024-11-18 18:44:21.213677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.089 [2024-11-18 18:44:21.226497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.089 [2024-11-18 18:44:21.226910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.089 [2024-11-18 18:44:21.226949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.089 [2024-11-18 18:44:21.226973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.089 [2024-11-18 18:44:21.227261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.089 [2024-11-18 18:44:21.227507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.089 [2024-11-18 18:44:21.227535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.089 [2024-11-18 18:44:21.227556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.089 [2024-11-18 18:44:21.227575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.089 [2024-11-18 18:44:21.240434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.089 [2024-11-18 18:44:21.240901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.089 [2024-11-18 18:44:21.240938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.089 [2024-11-18 18:44:21.240963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.089 [2024-11-18 18:44:21.241250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.089 [2024-11-18 18:44:21.241495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.089 [2024-11-18 18:44:21.241523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.089 [2024-11-18 18:44:21.241543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.089 [2024-11-18 18:44:21.241563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.089 [2024-11-18 18:44:21.254450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.089 [2024-11-18 18:44:21.254884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.089 [2024-11-18 18:44:21.254922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.089 [2024-11-18 18:44:21.254947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.089 [2024-11-18 18:44:21.255231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.089 [2024-11-18 18:44:21.255479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.089 [2024-11-18 18:44:21.255520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.089 [2024-11-18 18:44:21.255540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.089 [2024-11-18 18:44:21.255560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.089 [2024-11-18 18:44:21.268479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.089 [2024-11-18 18:44:21.268919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.089 [2024-11-18 18:44:21.268958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.089 [2024-11-18 18:44:21.268984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.089 [2024-11-18 18:44:21.269272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.089 [2024-11-18 18:44:21.269517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.089 [2024-11-18 18:44:21.269545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.089 [2024-11-18 18:44:21.269565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.089 [2024-11-18 18:44:21.269600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.089 [2024-11-18 18:44:21.282602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.089 [2024-11-18 18:44:21.283117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.089 [2024-11-18 18:44:21.283158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.089 [2024-11-18 18:44:21.283185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.089 [2024-11-18 18:44:21.283473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.089 [2024-11-18 18:44:21.283752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.089 [2024-11-18 18:44:21.283782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.089 [2024-11-18 18:44:21.283804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.089 [2024-11-18 18:44:21.283825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.089 [2024-11-18 18:44:21.296950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.089 [2024-11-18 18:44:21.297431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.089 [2024-11-18 18:44:21.297469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.089 [2024-11-18 18:44:21.297494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.089 [2024-11-18 18:44:21.297764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.089 [2024-11-18 18:44:21.298053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.089 [2024-11-18 18:44:21.298081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.089 [2024-11-18 18:44:21.298109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.089 [2024-11-18 18:44:21.298130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.089 [2024-11-18 18:44:21.311019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.089 [2024-11-18 18:44:21.311491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.089 [2024-11-18 18:44:21.311529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.089 [2024-11-18 18:44:21.311554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.089 [2024-11-18 18:44:21.311823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.089 [2024-11-18 18:44:21.312113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.089 [2024-11-18 18:44:21.312141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.089 [2024-11-18 18:44:21.312162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.089 [2024-11-18 18:44:21.312182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.089 [2024-11-18 18:44:21.325129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.089 [2024-11-18 18:44:21.325548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.089 [2024-11-18 18:44:21.325597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.089 [2024-11-18 18:44:21.325632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.089 [2024-11-18 18:44:21.325901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.089 [2024-11-18 18:44:21.326162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.089 [2024-11-18 18:44:21.326189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.089 [2024-11-18 18:44:21.326209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.089 [2024-11-18 18:44:21.326228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.089 [2024-11-18 18:44:21.339106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.089 [2024-11-18 18:44:21.339515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.089 [2024-11-18 18:44:21.339553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.089 [2024-11-18 18:44:21.339577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.089 [2024-11-18 18:44:21.339855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.089 [2024-11-18 18:44:21.340113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.089 [2024-11-18 18:44:21.340140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.089 [2024-11-18 18:44:21.340161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.089 [2024-11-18 18:44:21.340180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.089 2312.83 IOPS, 9.03 MiB/s [2024-11-18T17:44:21.426Z] [2024-11-18 18:44:21.353166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.089 [2024-11-18 18:44:21.353556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.089 [2024-11-18 18:44:21.353594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.089 [2024-11-18 18:44:21.353627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.089 [2024-11-18 18:44:21.353884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.089 [2024-11-18 18:44:21.354165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.089 [2024-11-18 18:44:21.354193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.089 [2024-11-18 18:44:21.354213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.089 [2024-11-18 18:44:21.354231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.089 [2024-11-18 18:44:21.367220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.089 [2024-11-18 18:44:21.367601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.089 [2024-11-18 18:44:21.367646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.089 [2024-11-18 18:44:21.367671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.089 [2024-11-18 18:44:21.367945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.089 [2024-11-18 18:44:21.368212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.089 [2024-11-18 18:44:21.368239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.089 [2024-11-18 18:44:21.368259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.089 [2024-11-18 18:44:21.368278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.089 [2024-11-18 18:44:21.381145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.089 [2024-11-18 18:44:21.381538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.090 [2024-11-18 18:44:21.381576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.090 [2024-11-18 18:44:21.381600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.090 [2024-11-18 18:44:21.381880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.090 [2024-11-18 18:44:21.382139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.090 [2024-11-18 18:44:21.382167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.090 [2024-11-18 18:44:21.382187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.090 [2024-11-18 18:44:21.382206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.090 [2024-11-18 18:44:21.395051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.090 [2024-11-18 18:44:21.395459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.090 [2024-11-18 18:44:21.395497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.090 [2024-11-18 18:44:21.395521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.090 [2024-11-18 18:44:21.395802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.090 [2024-11-18 18:44:21.396063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.090 [2024-11-18 18:44:21.396091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.090 [2024-11-18 18:44:21.396111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.090 [2024-11-18 18:44:21.396129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.090 [2024-11-18 18:44:21.409010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.090 [2024-11-18 18:44:21.409456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.090 [2024-11-18 18:44:21.409494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.090 [2024-11-18 18:44:21.409518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.090 [2024-11-18 18:44:21.409798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.090 [2024-11-18 18:44:21.410061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.090 [2024-11-18 18:44:21.410088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.090 [2024-11-18 18:44:21.410108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.090 [2024-11-18 18:44:21.410126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.090 [2024-11-18 18:44:21.423285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.350 [2024-11-18 18:44:21.423690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.350 [2024-11-18 18:44:21.423728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.350 [2024-11-18 18:44:21.423753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.350 [2024-11-18 18:44:21.424010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.350 [2024-11-18 18:44:21.424293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.350 [2024-11-18 18:44:21.424321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.350 [2024-11-18 18:44:21.424341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.350 [2024-11-18 18:44:21.424360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.350 [2024-11-18 18:44:21.437369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.350 [2024-11-18 18:44:21.437795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.350 [2024-11-18 18:44:21.437832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.350 [2024-11-18 18:44:21.437862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.350 [2024-11-18 18:44:21.438133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.350 [2024-11-18 18:44:21.438406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.350 [2024-11-18 18:44:21.438435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.350 [2024-11-18 18:44:21.438457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.350 [2024-11-18 18:44:21.438477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.350 [2024-11-18 18:44:21.451429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.350 [2024-11-18 18:44:21.451835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.350 [2024-11-18 18:44:21.451873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.350 [2024-11-18 18:44:21.451898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.350 [2024-11-18 18:44:21.452181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.350 [2024-11-18 18:44:21.452423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.350 [2024-11-18 18:44:21.452450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.350 [2024-11-18 18:44:21.452470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.350 [2024-11-18 18:44:21.452504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.350 [2024-11-18 18:44:21.465518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.350 [2024-11-18 18:44:21.465910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.350 [2024-11-18 18:44:21.465949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.350 [2024-11-18 18:44:21.465973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.350 [2024-11-18 18:44:21.466245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.350 [2024-11-18 18:44:21.466495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.350 [2024-11-18 18:44:21.466523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.350 [2024-11-18 18:44:21.466543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.350 [2024-11-18 18:44:21.466563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.350 [2024-11-18 18:44:21.479615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.350 [2024-11-18 18:44:21.480018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.351 [2024-11-18 18:44:21.480071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.351 [2024-11-18 18:44:21.480096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.351 [2024-11-18 18:44:21.480365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.351 [2024-11-18 18:44:21.480641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.351 [2024-11-18 18:44:21.480670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.351 [2024-11-18 18:44:21.480691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.351 [2024-11-18 18:44:21.480710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.351 [2024-11-18 18:44:21.493561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.351 [2024-11-18 18:44:21.494062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.351 [2024-11-18 18:44:21.494101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.351 [2024-11-18 18:44:21.494126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.351 [2024-11-18 18:44:21.494409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.351 [2024-11-18 18:44:21.494682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.351 [2024-11-18 18:44:21.494711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.351 [2024-11-18 18:44:21.494732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.351 [2024-11-18 18:44:21.494751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.351 [2024-11-18 18:44:21.507553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.351 [2024-11-18 18:44:21.508027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.351 [2024-11-18 18:44:21.508066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.351 [2024-11-18 18:44:21.508091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.351 [2024-11-18 18:44:21.508360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.351 [2024-11-18 18:44:21.508618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.351 [2024-11-18 18:44:21.508662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.351 [2024-11-18 18:44:21.508685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.351 [2024-11-18 18:44:21.508705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.351 [2024-11-18 18:44:21.521461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.351 [2024-11-18 18:44:21.521867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.351 [2024-11-18 18:44:21.521905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.351 [2024-11-18 18:44:21.521930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.351 [2024-11-18 18:44:21.522213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.351 [2024-11-18 18:44:21.522454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.351 [2024-11-18 18:44:21.522481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.351 [2024-11-18 18:44:21.522507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.351 [2024-11-18 18:44:21.522528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.351 [2024-11-18 18:44:21.535419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.351 [2024-11-18 18:44:21.535849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.351 [2024-11-18 18:44:21.535888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.351 [2024-11-18 18:44:21.535912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.351 [2024-11-18 18:44:21.536180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.351 [2024-11-18 18:44:21.536421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.351 [2024-11-18 18:44:21.536448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.351 [2024-11-18 18:44:21.536468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.351 [2024-11-18 18:44:21.536487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.351 [2024-11-18 18:44:21.549573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.351 [2024-11-18 18:44:21.550084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.351 [2024-11-18 18:44:21.550122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.351 [2024-11-18 18:44:21.550147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.351 [2024-11-18 18:44:21.550417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.351 [2024-11-18 18:44:21.550697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.351 [2024-11-18 18:44:21.550726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.351 [2024-11-18 18:44:21.550748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.351 [2024-11-18 18:44:21.550768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.351 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:23.351 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:23.351 [2024-11-18 18:44:21.563857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.351 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:23.351 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:23.351 [2024-11-18 18:44:21.564253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.351 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:23.351 [2024-11-18 18:44:21.564291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.351 [2024-11-18 18:44:21.564316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.351 [2024-11-18 18:44:21.564574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.351 [2024-11-18 18:44:21.564842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.351 [2024-11-18 18:44:21.564879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.351 [2024-11-18 18:44:21.564902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.351 [2024-11-18 18:44:21.564938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.351 [2024-11-18 18:44:21.577947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.351 [2024-11-18 18:44:21.578407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.351 [2024-11-18 18:44:21.578446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.351 [2024-11-18 18:44:21.578470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.351 [2024-11-18 18:44:21.578737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.351 [2024-11-18 18:44:21.579040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.351 [2024-11-18 18:44:21.579067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.351 [2024-11-18 18:44:21.579087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.351 [2024-11-18 18:44:21.579107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.351 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:23.352 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:23.352 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.352 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:23.352 [2024-11-18 18:44:21.586792] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:23.352 [2024-11-18 18:44:21.592227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.352 [2024-11-18 18:44:21.592648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.352 [2024-11-18 18:44:21.592687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.352 [2024-11-18 18:44:21.592711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.352 [2024-11-18 18:44:21.592998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.352 [2024-11-18 18:44:21.593245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.352 [2024-11-18 18:44:21.593273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.352 [2024-11-18 18:44:21.593293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.352 [2024-11-18 18:44:21.593313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.352 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.352 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:23.352 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.352 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:23.352 [2024-11-18 18:44:21.606695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.352 [2024-11-18 18:44:21.607182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.352 [2024-11-18 18:44:21.607221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.352 [2024-11-18 18:44:21.607247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.352 [2024-11-18 18:44:21.607554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.352 [2024-11-18 18:44:21.607850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.352 [2024-11-18 18:44:21.607880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.352 [2024-11-18 18:44:21.607929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.352 [2024-11-18 18:44:21.607950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.352 [2024-11-18 18:44:21.620730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.352 [2024-11-18 18:44:21.621166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.352 [2024-11-18 18:44:21.621203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.352 [2024-11-18 18:44:21.621228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.352 [2024-11-18 18:44:21.621516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.352 [2024-11-18 18:44:21.621796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.352 [2024-11-18 18:44:21.621825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.352 [2024-11-18 18:44:21.621846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.352 [2024-11-18 18:44:21.621866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.352 [2024-11-18 18:44:21.635074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.352 [2024-11-18 18:44:21.635732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.352 [2024-11-18 18:44:21.635780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.352 [2024-11-18 18:44:21.635810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.352 [2024-11-18 18:44:21.636109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.352 [2024-11-18 18:44:21.636367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.352 [2024-11-18 18:44:21.636396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.352 [2024-11-18 18:44:21.636421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.352 [2024-11-18 18:44:21.636445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.352 [2024-11-18 18:44:21.649217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.352 [2024-11-18 18:44:21.649621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.352 [2024-11-18 18:44:21.649659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.352 [2024-11-18 18:44:21.649690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.352 [2024-11-18 18:44:21.649983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.352 [2024-11-18 18:44:21.650230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.352 [2024-11-18 18:44:21.650257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.352 [2024-11-18 18:44:21.650277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.352 [2024-11-18 18:44:21.650297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.352 [2024-11-18 18:44:21.663519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.352 [2024-11-18 18:44:21.663971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.352 [2024-11-18 18:44:21.664010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.352 [2024-11-18 18:44:21.664036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.352 [2024-11-18 18:44:21.664327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.352 [2024-11-18 18:44:21.664623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.352 [2024-11-18 18:44:21.664652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.352 [2024-11-18 18:44:21.664674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.352 [2024-11-18 18:44:21.664695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.352 [2024-11-18 18:44:21.677631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.352 [2024-11-18 18:44:21.678064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.352 [2024-11-18 18:44:21.678102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.352 [2024-11-18 18:44:21.678127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.352 [2024-11-18 18:44:21.678414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.352 [2024-11-18 18:44:21.678689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.352 [2024-11-18 18:44:21.678718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.352 [2024-11-18 18:44:21.678740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.352 [2024-11-18 18:44:21.678759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.611 Malloc0 00:37:23.611 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.611 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:23.611 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.611 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:23.611 [2024-11-18 18:44:21.691793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.611 [2024-11-18 18:44:21.692180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.611 [2024-11-18 18:44:21.692224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.611 [2024-11-18 18:44:21.692249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.611 [2024-11-18 18:44:21.692510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.611 [2024-11-18 18:44:21.692781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.611 [2024-11-18 18:44:21.692812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.611 [2024-11-18 18:44:21.692834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.611 [2024-11-18 18:44:21.692854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.611 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.611 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:23.611 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.611 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:23.611 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.611 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:23.611 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.611 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:23.611 [2024-11-18 18:44:21.706087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.611 [2024-11-18 18:44:21.706474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.611 [2024-11-18 18:44:21.706512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.611 [2024-11-18 18:44:21.706536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.611 [2024-11-18 18:44:21.706813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.611 [2024-11-18 18:44:21.707084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.611 [2024-11-18 18:44:21.707112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.611 [2024-11-18 18:44:21.707133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.611 [2024-11-18 18:44:21.707153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.611 [2024-11-18 18:44:21.709700] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:23.611 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.611 18:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3136184 00:37:23.611 [2024-11-18 18:44:21.720323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.611 [2024-11-18 18:44:21.801641] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:37:25.109 2458.29 IOPS, 9.60 MiB/s [2024-11-18T17:44:24.378Z] 2924.38 IOPS, 11.42 MiB/s [2024-11-18T17:44:25.751Z] 3289.56 IOPS, 12.85 MiB/s [2024-11-18T17:44:26.685Z] 3592.00 IOPS, 14.03 MiB/s [2024-11-18T17:44:27.620Z] 3831.64 IOPS, 14.97 MiB/s [2024-11-18T17:44:28.553Z] 4025.33 IOPS, 15.72 MiB/s [2024-11-18T17:44:29.487Z] 4196.54 IOPS, 16.39 MiB/s [2024-11-18T17:44:30.419Z] 4341.57 IOPS, 16.96 MiB/s 00:37:32.082 Latency(us) 00:37:32.082 [2024-11-18T17:44:30.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:32.082 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:32.082 Verification LBA range: start 0x0 length 0x4000 00:37:32.082 Nvme1n1 : 15.01 4469.39 17.46 9223.39 0.00 9319.23 1159.02 39030.33 00:37:32.082 [2024-11-18T17:44:30.419Z] =================================================================================================================== 00:37:32.082 [2024-11-18T17:44:30.419Z] Total : 4469.39 17.46 9223.39 0.00 9319.23 1159.02 39030.33 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:37:33.014 18:44:31 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:33.014 rmmod nvme_tcp 00:37:33.014 rmmod nvme_fabrics 00:37:33.014 rmmod nvme_keyring 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3136850 ']' 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3136850 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3136850 ']' 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3136850 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3136850 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3136850' 00:37:33.014 killing process with pid 3136850 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@973 -- # kill 3136850 00:37:33.014 18:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3136850 00:37:34.386 18:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:34.386 18:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:34.386 18:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:34.386 18:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:37:34.386 18:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:37:34.386 18:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:34.386 18:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:37:34.386 18:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:34.386 18:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:34.386 18:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:34.386 18:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:34.386 18:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:36.287 18:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:36.287 00:37:36.287 real 0m26.442s 00:37:36.287 user 1m11.933s 00:37:36.287 sys 0m4.868s 00:37:36.287 18:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:36.287 18:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:36.287 ************************************ 00:37:36.287 END TEST nvmf_bdevperf 00:37:36.287 ************************************ 00:37:36.287 18:44:34 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:36.287 18:44:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:36.287 18:44:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:36.287 18:44:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.546 ************************************ 00:37:36.546 START TEST nvmf_target_disconnect 00:37:36.546 ************************************ 00:37:36.546 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:36.546 * Looking for test storage... 00:37:36.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:36.546 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:36.546 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:37:36.546 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:36.546 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:36.546 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:36.546 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:36.546 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:36.546 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:37:36.546 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:37:36.546 18:44:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:37:36.546 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:37:36.546 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:37:36.546 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:37:36.546 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:37:36.546 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:36.546 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:37:36.546 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:37:36.546 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:36.546 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:36.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.547 --rc genhtml_branch_coverage=1 00:37:36.547 --rc genhtml_function_coverage=1 00:37:36.547 --rc genhtml_legend=1 00:37:36.547 --rc geninfo_all_blocks=1 00:37:36.547 --rc geninfo_unexecuted_blocks=1 
00:37:36.547 00:37:36.547 ' 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:36.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.547 --rc genhtml_branch_coverage=1 00:37:36.547 --rc genhtml_function_coverage=1 00:37:36.547 --rc genhtml_legend=1 00:37:36.547 --rc geninfo_all_blocks=1 00:37:36.547 --rc geninfo_unexecuted_blocks=1 00:37:36.547 00:37:36.547 ' 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:36.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.547 --rc genhtml_branch_coverage=1 00:37:36.547 --rc genhtml_function_coverage=1 00:37:36.547 --rc genhtml_legend=1 00:37:36.547 --rc geninfo_all_blocks=1 00:37:36.547 --rc geninfo_unexecuted_blocks=1 00:37:36.547 00:37:36.547 ' 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:36.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.547 --rc genhtml_branch_coverage=1 00:37:36.547 --rc genhtml_function_coverage=1 00:37:36.547 --rc genhtml_legend=1 00:37:36.547 --rc geninfo_all_blocks=1 00:37:36.547 --rc geninfo_unexecuted_blocks=1 00:37:36.547 00:37:36.547 ' 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:36.547 18:44:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:36.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:36.547 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:36.548 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:36.548 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:36.548 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:36.548 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:36.548 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:36.548 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:36.548 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:36.548 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:36.548 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:36.548 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:36.548 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:37:36.548 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:36.548 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:36.548 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:36.548 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:37:36.548 18:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:37:38.450 
18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:38.450 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:38.450 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:38.450 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:38.450 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:38.450 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:38.451 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:38.451 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:38.451 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:38.451 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:38.451 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:38.451 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:38.451 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:38.451 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:38.451 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:38.451 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:38.709 18:44:36 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:38.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:38.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:37:38.709 00:37:38.709 --- 10.0.0.2 ping statistics --- 00:37:38.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:38.709 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:38.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:38.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:37:38.709 00:37:38.709 --- 10.0.0.1 ping statistics --- 00:37:38.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:38.709 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:38.709 18:44:36 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:38.709 ************************************ 00:37:38.709 START TEST nvmf_target_disconnect_tc1 00:37:38.709 ************************************ 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:37:38.709 18:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:38.968 [2024-11-18 18:44:37.136798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.968 [2024-11-18 18:44:37.136909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 
with addr=10.0.0.2, port=4420 00:37:38.968 [2024-11-18 18:44:37.137015] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:38.968 [2024-11-18 18:44:37.137056] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:38.968 [2024-11-18 18:44:37.137084] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:37:38.968 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:37:38.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:38.968 Initializing NVMe Controllers 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:38.968 00:37:38.968 real 0m0.238s 00:37:38.968 user 0m0.113s 00:37:38.968 sys 0m0.124s 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:38.968 ************************************ 00:37:38.968 END TEST nvmf_target_disconnect_tc1 00:37:38.968 ************************************ 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:38.968 18:44:37 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:38.968 ************************************ 00:37:38.968 START TEST nvmf_target_disconnect_tc2 00:37:38.968 ************************************ 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3140258 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3140258 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3140258 ']' 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:38.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:38.968 18:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:39.226 [2024-11-18 18:44:37.326505] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:37:39.226 [2024-11-18 18:44:37.326649] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:39.226 [2024-11-18 18:44:37.468878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:39.485 [2024-11-18 18:44:37.599649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:39.485 [2024-11-18 18:44:37.599724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:39.485 [2024-11-18 18:44:37.599747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:39.485 [2024-11-18 18:44:37.599768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:39.485 [2024-11-18 18:44:37.599785] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:39.485 [2024-11-18 18:44:37.602437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:39.485 [2024-11-18 18:44:37.602508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:39.485 [2024-11-18 18:44:37.602539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:39.485 [2024-11-18 18:44:37.602547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:40.051 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:40.051 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:37:40.051 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:40.051 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:40.051 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:40.309 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:40.309 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:40.310 Malloc0 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.310 18:44:38 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:40.310 [2024-11-18 18:44:38.498296] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.310 18:44:38 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:40.310 [2024-11-18 18:44:38.530707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3140418 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:40.310 18:44:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:42.857 18:44:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3140258 00:37:42.857 18:44:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 
Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 [2024-11-18 18:44:40.568743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 
00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Write completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 
00:37:42.857 [2024-11-18 18:44:40.569366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.857 Read completed with error (sct=0, sc=8) 00:37:42.857 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Write completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Write completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Write completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Write completed with error (sct=0, sc=8) 00:37:42.858 
starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Write completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Write completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Write completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Write completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Write completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 [2024-11-18 18:44:40.570027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Write completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, 
sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Write completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Write completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Write completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Write completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Write completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Write completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Write completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Write completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 Read completed with error (sct=0, sc=8) 00:37:42.858 starting I/O failed 00:37:42.858 [2024-11-18 18:44:40.570685] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:42.858 [2024-11-18 18:44:40.570848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.570908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 00:37:42.858 [2024-11-18 18:44:40.571074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.571111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 00:37:42.858 [2024-11-18 18:44:40.571286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.571321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 00:37:42.858 [2024-11-18 18:44:40.571429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.571463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 00:37:42.858 [2024-11-18 18:44:40.571583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.571636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 
00:37:42.858 [2024-11-18 18:44:40.571753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.571786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 00:37:42.858 [2024-11-18 18:44:40.571899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.571939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 00:37:42.858 [2024-11-18 18:44:40.572082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.572117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 00:37:42.858 [2024-11-18 18:44:40.572259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.572292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 00:37:42.858 [2024-11-18 18:44:40.572469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.572521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 
00:37:42.858 [2024-11-18 18:44:40.572682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.572717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 00:37:42.858 [2024-11-18 18:44:40.572832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.572867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 00:37:42.858 [2024-11-18 18:44:40.573028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.573062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 00:37:42.858 [2024-11-18 18:44:40.573225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.573259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 00:37:42.858 [2024-11-18 18:44:40.573430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.573463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 
00:37:42.858 [2024-11-18 18:44:40.573568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.573621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 00:37:42.858 [2024-11-18 18:44:40.573747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.573782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 00:37:42.858 [2024-11-18 18:44:40.573899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.573944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 00:37:42.858 [2024-11-18 18:44:40.574106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.574171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 00:37:42.858 [2024-11-18 18:44:40.574345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.574405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 
00:37:42.858 [2024-11-18 18:44:40.574602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.574667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 00:37:42.858 [2024-11-18 18:44:40.574783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.574818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 00:37:42.858 [2024-11-18 18:44:40.575005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.575038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 00:37:42.858 [2024-11-18 18:44:40.575207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.858 [2024-11-18 18:44:40.575240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.858 qpair failed and we were unable to recover it. 00:37:42.858 [2024-11-18 18:44:40.575362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.575396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 
00:37:42.859 [2024-11-18 18:44:40.575537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.575580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.575727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.575760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.575861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.575918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.576079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.576114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.576219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.576252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 
00:37:42.859 [2024-11-18 18:44:40.576436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.576473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.576652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.576687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.576796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.576829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.576985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.577019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.577225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.577259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 
00:37:42.859 [2024-11-18 18:44:40.577377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.577411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.577588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.577646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.577773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.577823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.577980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.578018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.578179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.578214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 
00:37:42.859 [2024-11-18 18:44:40.578376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.578411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.578561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.578600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.578748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.578784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.578909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.578957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.579154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.579189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 
00:37:42.859 [2024-11-18 18:44:40.579328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.579362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.579539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.579598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.579724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.579759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.579906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.579954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.580120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.580158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 
00:37:42.859 [2024-11-18 18:44:40.580257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.580292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.580430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.580464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.580634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.580672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.580790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.580826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.580973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.581008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 
00:37:42.859 [2024-11-18 18:44:40.581150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.581184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.581358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.581392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.581655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.581690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.581811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.581845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.581992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.582026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 
00:37:42.859 [2024-11-18 18:44:40.582157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.582210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.582356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.582396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.582553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.582596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.582726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.582773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.582904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.582941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 
00:37:42.859 [2024-11-18 18:44:40.583138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.583180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.583287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.583321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.583470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.859 [2024-11-18 18:44:40.583508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.859 qpair failed and we were unable to recover it. 00:37:42.859 [2024-11-18 18:44:40.583650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.583686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.583820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.583853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 
00:37:42.860 [2024-11-18 18:44:40.584001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.584038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.584184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.584223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.584359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.584393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.584530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.584563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.584684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.584718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 
00:37:42.860 [2024-11-18 18:44:40.584841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.584877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.585043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.585099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.585327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.585362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.585534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.585572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.585755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.585792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 
00:37:42.860 [2024-11-18 18:44:40.585920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.585969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.586183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.586242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.586397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.586431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.586563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.586614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.586714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.586748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 
00:37:42.860 [2024-11-18 18:44:40.586888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.586933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.587077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.587113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.587267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.587303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.587464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.587502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.587643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.587696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 
00:37:42.860 [2024-11-18 18:44:40.587820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.587855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.588073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.588108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.588216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.588251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.588415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.588466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.588603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.588645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 
00:37:42.860 [2024-11-18 18:44:40.588786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.588821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.588980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.589033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.589187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.589224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.589441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.589476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.589601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.589661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 
00:37:42.860 [2024-11-18 18:44:40.589788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.589837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.589973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.590010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.590264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.590300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.590439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.590473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.590631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.590669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 
00:37:42.860 [2024-11-18 18:44:40.590783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.590823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.590980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.591030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.591224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.591260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.591428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.591467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.591581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.591635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 
00:37:42.860 [2024-11-18 18:44:40.591771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.860 [2024-11-18 18:44:40.591806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.860 qpair failed and we were unable to recover it. 00:37:42.860 [2024-11-18 18:44:40.591967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.861 [2024-11-18 18:44:40.592001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.861 qpair failed and we were unable to recover it. 00:37:42.861 [2024-11-18 18:44:40.592116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.861 [2024-11-18 18:44:40.592151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.861 qpair failed and we were unable to recover it. 00:37:42.861 [2024-11-18 18:44:40.592251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.861 [2024-11-18 18:44:40.592286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.861 qpair failed and we were unable to recover it. 00:37:42.861 [2024-11-18 18:44:40.592425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.861 [2024-11-18 18:44:40.592460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.861 qpair failed and we were unable to recover it. 
00:37:42.861 [2024-11-18 18:44:40.592640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.861 [2024-11-18 18:44:40.592694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.861 qpair failed and we were unable to recover it. 00:37:42.861 [2024-11-18 18:44:40.592836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.861 [2024-11-18 18:44:40.592871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.861 qpair failed and we were unable to recover it. 00:37:42.861 [2024-11-18 18:44:40.593032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.861 [2024-11-18 18:44:40.593069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.861 qpair failed and we were unable to recover it. 00:37:42.861 [2024-11-18 18:44:40.593268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.861 [2024-11-18 18:44:40.593303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.861 qpair failed and we were unable to recover it. 00:37:42.861 [2024-11-18 18:44:40.593444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.861 [2024-11-18 18:44:40.593477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.861 qpair failed and we were unable to recover it. 
00:37:42.861 [2024-11-18 18:44:40.593586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.861 [2024-11-18 18:44:40.593637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.861 qpair failed and we were unable to recover it. 00:37:42.861 [2024-11-18 18:44:40.593752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.861 [2024-11-18 18:44:40.593786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.861 qpair failed and we were unable to recover it. 00:37:42.861 [2024-11-18 18:44:40.593930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.861 [2024-11-18 18:44:40.593964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.861 qpair failed and we were unable to recover it. 00:37:42.861 [2024-11-18 18:44:40.594100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.861 [2024-11-18 18:44:40.594134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.861 qpair failed and we were unable to recover it. 00:37:42.861 [2024-11-18 18:44:40.594260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.861 [2024-11-18 18:44:40.594293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.861 qpair failed and we were unable to recover it. 
00:37:42.861 [2024-11-18 18:44:40.594481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.594515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.594650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.594698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.594869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.594943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.595134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.595171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.595307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.595353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.595495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.595529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.595757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.595793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.595935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.595969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.596082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.596118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.596247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.596286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.596481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.596517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.596689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.596724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.596831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.596865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.596977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.597012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.597150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.597184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.597331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.597368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.597488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.597528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.597701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.597749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.597916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.597965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.598114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.598151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.598298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.598357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.598522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.598556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.598717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.598752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.598884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.861 [2024-11-18 18:44:40.598929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.861 qpair failed and we were unable to recover it.
00:37:42.861 [2024-11-18 18:44:40.599100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.599138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.599301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.599337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.599484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.599534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.599710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.599745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.599846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.599880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.600008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.600043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.600176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.600211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.600370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.600404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.600519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.600555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.600738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.600788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.600960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.601033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.601163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.601200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.601341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.601376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.601567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.601621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.601819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.601868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.601993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.602030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.602179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.602214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.602356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.602391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.602536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.602600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.602743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.602781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.602906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.602940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.603131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.603198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.603419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.603453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.603568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.603620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.603744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.603778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.603937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.603974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.604174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.604271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.604446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.604484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.604674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.604709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.604824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.604859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.604995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.605029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.605162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.605196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.605395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.605429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.605581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.605633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.605743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.605776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.605921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.605955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.606094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.606151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.606292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.606329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.606528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.606578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.606706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.606740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.606865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.606909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.607016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.607050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.607226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.607260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.862 [2024-11-18 18:44:40.607384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.862 [2024-11-18 18:44:40.607432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.862 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.607603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.607660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.607800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.607836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.607993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.608027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.608130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.608165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.608303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.608351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.608463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.608511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.608680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.608716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.608821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.608855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.608977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.609011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.609166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.609204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.609397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.609434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.609630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.609676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.609785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.609820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.609990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.610024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.610144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.610203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.610402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.610461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.610666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.610701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.610811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.610846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.610988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.611023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.611180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.611218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.611368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.611406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.611582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.611640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.611760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.611794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.611962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.611996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.612162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.612214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.612426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.612469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.612618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.863 [2024-11-18 18:44:40.612652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.863 qpair failed and we were unable to recover it.
00:37:42.863 [2024-11-18 18:44:40.612815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.863 [2024-11-18 18:44:40.612863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.863 qpair failed and we were unable to recover it. 00:37:42.863 [2024-11-18 18:44:40.613079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.863 [2024-11-18 18:44:40.613114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.863 qpair failed and we were unable to recover it. 00:37:42.863 [2024-11-18 18:44:40.613241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.863 [2024-11-18 18:44:40.613275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.863 qpair failed and we were unable to recover it. 00:37:42.863 [2024-11-18 18:44:40.613383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.863 [2024-11-18 18:44:40.613417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.863 qpair failed and we were unable to recover it. 00:37:42.863 [2024-11-18 18:44:40.613549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.863 [2024-11-18 18:44:40.613583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.863 qpair failed and we were unable to recover it. 
00:37:42.863 [2024-11-18 18:44:40.613757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.863 [2024-11-18 18:44:40.613796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.863 qpair failed and we were unable to recover it. 00:37:42.863 [2024-11-18 18:44:40.613947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.863 [2024-11-18 18:44:40.613982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.863 qpair failed and we were unable to recover it. 00:37:42.863 [2024-11-18 18:44:40.614142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.863 [2024-11-18 18:44:40.614192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.863 qpair failed and we were unable to recover it. 00:37:42.863 [2024-11-18 18:44:40.614324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.863 [2024-11-18 18:44:40.614358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.863 qpair failed and we were unable to recover it. 00:37:42.863 [2024-11-18 18:44:40.614489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.863 [2024-11-18 18:44:40.614522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.863 qpair failed and we were unable to recover it. 
00:37:42.863 [2024-11-18 18:44:40.614648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.863 [2024-11-18 18:44:40.614693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.863 qpair failed and we were unable to recover it. 00:37:42.863 [2024-11-18 18:44:40.614839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.863 [2024-11-18 18:44:40.614874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.863 qpair failed and we were unable to recover it. 00:37:42.863 [2024-11-18 18:44:40.615023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.863 [2024-11-18 18:44:40.615061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.863 qpair failed and we were unable to recover it. 00:37:42.863 [2024-11-18 18:44:40.615257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.863 [2024-11-18 18:44:40.615294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.863 qpair failed and we were unable to recover it. 00:37:42.863 [2024-11-18 18:44:40.615468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.863 [2024-11-18 18:44:40.615506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.863 qpair failed and we were unable to recover it. 
00:37:42.863 [2024-11-18 18:44:40.615700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.863 [2024-11-18 18:44:40.615742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.863 qpair failed and we were unable to recover it. 00:37:42.863 [2024-11-18 18:44:40.615880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.615924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.616042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.616080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.616229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.616266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.616437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.616471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 
00:37:42.864 [2024-11-18 18:44:40.616619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.616654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.616784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.616818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.616952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.617001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.617170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.617206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.617351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.617386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 
00:37:42.864 [2024-11-18 18:44:40.617519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.617552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.617695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.617731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.617902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.617953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.618098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.618150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.618320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.618355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 
00:37:42.864 [2024-11-18 18:44:40.618491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.618526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.618690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.618725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.618840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.618875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.619032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.619071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.619232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.619267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 
00:37:42.864 [2024-11-18 18:44:40.619430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.619463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.619605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.619650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.619752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.619786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.619938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.619988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.620217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.620255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 
00:37:42.864 [2024-11-18 18:44:40.620401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.620437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.620618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.620654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.620796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.620831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.621038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.621092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.621372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.621441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 
00:37:42.864 [2024-11-18 18:44:40.621632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.621691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.621808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.621843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.622003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.622037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.622265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.622303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.622461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.622496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 
00:37:42.864 [2024-11-18 18:44:40.622640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.622681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.622814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.622847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.623112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.623146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.623256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.623290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.623426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.623464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 
00:37:42.864 [2024-11-18 18:44:40.623628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.623684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.623846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.623882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.624027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.624065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.624208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.624261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 00:37:42.864 [2024-11-18 18:44:40.624408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.864 [2024-11-18 18:44:40.624443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.864 qpair failed and we were unable to recover it. 
00:37:42.865 [2024-11-18 18:44:40.624600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.624673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 00:37:42.865 [2024-11-18 18:44:40.624823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.624858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 00:37:42.865 [2024-11-18 18:44:40.625051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.625086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 00:37:42.865 [2024-11-18 18:44:40.625260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.625298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 00:37:42.865 [2024-11-18 18:44:40.625443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.625481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 
00:37:42.865 [2024-11-18 18:44:40.625640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.625676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 00:37:42.865 [2024-11-18 18:44:40.625823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.625872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 00:37:42.865 [2024-11-18 18:44:40.626025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.626062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 00:37:42.865 [2024-11-18 18:44:40.626278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.626328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 00:37:42.865 [2024-11-18 18:44:40.626479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.626518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 
00:37:42.865 [2024-11-18 18:44:40.626696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.626750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 00:37:42.865 [2024-11-18 18:44:40.626897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.626933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 00:37:42.865 [2024-11-18 18:44:40.627054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.627108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 00:37:42.865 [2024-11-18 18:44:40.627387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.627461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 00:37:42.865 [2024-11-18 18:44:40.627603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.627650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 
00:37:42.865 [2024-11-18 18:44:40.627788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.627821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 00:37:42.865 [2024-11-18 18:44:40.627965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.628014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 00:37:42.865 [2024-11-18 18:44:40.628170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.628221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 00:37:42.865 [2024-11-18 18:44:40.628403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.628437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 00:37:42.865 [2024-11-18 18:44:40.628556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.628594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 
00:37:42.865 [2024-11-18 18:44:40.628735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.628769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 00:37:42.865 [2024-11-18 18:44:40.628883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.628925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 00:37:42.865 [2024-11-18 18:44:40.629095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.629128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 00:37:42.865 [2024-11-18 18:44:40.629341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.629375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 00:37:42.865 [2024-11-18 18:44:40.629535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.865 [2024-11-18 18:44:40.629569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.865 qpair failed and we were unable to recover it. 
00:37:42.865 [2024-11-18 18:44:40.629751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.865 [2024-11-18 18:44:40.629806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.865 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111 / sock connection error of tqpair=... with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats continuously from 18:44:40.629947 through 18:44:40.651538, cycling across tqpair handles 0x6150001ffe80, 0x6150001f2f00, 0x61500021ff00, and 0x615000210000, all targeting 10.0.0.2:4420 ...]
00:37:42.868 [2024-11-18 18:44:40.651655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.651690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.651794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.651828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.651987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.652025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.652203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.652240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.652414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.652463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 
00:37:42.868 [2024-11-18 18:44:40.652630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.652672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.652809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.652844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.653031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.653086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.653236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.653288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.653464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.653498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 
00:37:42.868 [2024-11-18 18:44:40.653612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.653655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.653789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.653822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.654018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.654060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.654222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.654258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.654412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.654466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 
00:37:42.868 [2024-11-18 18:44:40.654593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.654637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.654754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.654790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.654921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.654970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.655172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.655250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.655371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.655409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 
00:37:42.868 [2024-11-18 18:44:40.655518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.655556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.655721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.655782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.655889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.655926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.656057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.656109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.656275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.656331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 
00:37:42.868 [2024-11-18 18:44:40.656438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.656473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.656622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.656656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.656797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.656836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.657037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.657071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.657215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.657248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 
00:37:42.868 [2024-11-18 18:44:40.657347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.657381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.657530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.657567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.657727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.657762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.657866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.657901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.658052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.658086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 
00:37:42.868 [2024-11-18 18:44:40.658252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.658286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.658415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.658448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.658576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.658632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.658775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.658812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.658997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.659064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 
00:37:42.868 [2024-11-18 18:44:40.659309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.659367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.659523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.659561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.659769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.659805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.659906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.659940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.868 qpair failed and we were unable to recover it. 00:37:42.868 [2024-11-18 18:44:40.660103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.868 [2024-11-18 18:44:40.660155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 
00:37:42.869 [2024-11-18 18:44:40.660311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.660349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.660527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.660561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.660684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.660719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.660882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.660920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.661125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.661174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 
00:37:42.869 [2024-11-18 18:44:40.661295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.661333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.661522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.661576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.661766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.661815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.661945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.661984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.662116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.662151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 
00:37:42.869 [2024-11-18 18:44:40.662312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.662346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.662482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.662516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.662660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.662695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.662848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.662898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.663063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.663100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 
00:37:42.869 [2024-11-18 18:44:40.663228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.663264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.663436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.663471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.663615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.663651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.663790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.663825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.664000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.664035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 
00:37:42.869 [2024-11-18 18:44:40.664144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.664179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.664389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.664427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.664578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.664633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.664790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.664839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.665011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.665047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 
00:37:42.869 [2024-11-18 18:44:40.665214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.665269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.665422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.665461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.665621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.665674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.665872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.665907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 00:37:42.869 [2024-11-18 18:44:40.666042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.869 [2024-11-18 18:44:40.666076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.869 qpair failed and we were unable to recover it. 
00:37:42.869 [2024-11-18 18:44:40.666216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.869 [2024-11-18 18:44:40.666250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.869 qpair failed and we were unable to recover it.
00:37:42.869 [2024-11-18 18:44:40.666396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.869 [2024-11-18 18:44:40.666444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.869 qpair failed and we were unable to recover it.
00:37:42.869 [2024-11-18 18:44:40.666566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.869 [2024-11-18 18:44:40.666604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.869 qpair failed and we were unable to recover it.
00:37:42.869 [2024-11-18 18:44:40.666803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.869 [2024-11-18 18:44:40.666852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.869 qpair failed and we were unable to recover it.
00:37:42.869 [2024-11-18 18:44:40.666999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.869 [2024-11-18 18:44:40.667035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.869 qpair failed and we were unable to recover it.
00:37:42.869 [2024-11-18 18:44:40.667175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.869 [2024-11-18 18:44:40.667209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.869 qpair failed and we were unable to recover it.
00:37:42.869 [2024-11-18 18:44:40.667305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.869 [2024-11-18 18:44:40.667347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.869 qpair failed and we were unable to recover it.
00:37:42.869 [2024-11-18 18:44:40.667460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.869 [2024-11-18 18:44:40.667495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.869 qpair failed and we were unable to recover it.
00:37:42.869 [2024-11-18 18:44:40.667633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.869 [2024-11-18 18:44:40.667687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.869 qpair failed and we were unable to recover it.
00:37:42.869 [2024-11-18 18:44:40.667856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.869 [2024-11-18 18:44:40.667893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.869 qpair failed and we were unable to recover it.
00:37:42.869 [2024-11-18 18:44:40.668025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.869 [2024-11-18 18:44:40.668059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.869 qpair failed and we were unable to recover it.
00:37:42.869 [2024-11-18 18:44:40.668205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.869 [2024-11-18 18:44:40.668240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.869 qpair failed and we were unable to recover it.
00:37:42.869 [2024-11-18 18:44:40.668346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.869 [2024-11-18 18:44:40.668381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.869 qpair failed and we were unable to recover it.
00:37:42.869 [2024-11-18 18:44:40.668546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.869 [2024-11-18 18:44:40.668580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.869 qpair failed and we were unable to recover it.
00:37:42.869 [2024-11-18 18:44:40.668718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.869 [2024-11-18 18:44:40.668752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.869 qpair failed and we were unable to recover it.
00:37:42.869 [2024-11-18 18:44:40.668894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.668932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.669137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.669190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.669314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.669378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.669516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.669550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.669664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.669699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.669839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.669876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.670016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.670051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.670203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.670236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.670366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.670399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.670538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.670572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.670716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.670750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.670891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.670944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.671077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.671129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.671271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.671305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.671415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.671449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.671581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.671626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.671754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.671788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.671918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.671957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.672122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.672156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.672289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.672323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.672425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.672458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.672597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.672639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.672745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.672785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.672947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.672980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.673118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.673152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.673263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.673297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.673431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.673467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.673621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.673656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.673855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.673909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.674034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.674086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.674254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.674287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.674426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.674461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.674569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.674603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.674756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.674809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.675025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.675084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.675285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.675350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.675498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.675533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.675645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.675681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.675842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.675877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.676015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.676070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.676236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.676274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.676417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.676467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.676651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.676686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.676819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.676853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.676991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.677025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.677161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.677228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.677440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.677477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.677625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.677661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.677797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.677831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.677980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.870 [2024-11-18 18:44:40.678015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.870 qpair failed and we were unable to recover it.
00:37:42.870 [2024-11-18 18:44:40.678166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.678199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.678363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.678399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.678557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.678600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.678744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.678778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.678894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.678928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.679111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.679146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.679280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.679315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.679422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.679457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.679596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.679639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.679748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.679784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.679921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.679955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.680092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.680129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.680243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.680286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.680462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.680517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.680679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.680715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.680840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.680903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.681051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.681101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.681301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.681336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.681472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.681506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.681662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.681698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.681852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.681893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.682009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.682047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.682189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.682228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.682385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.682424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.682572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.682638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.682771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.682805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.682946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.682980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.683139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.683173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.683393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.683431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.683568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.683625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.683791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.683825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.683958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.683993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.684167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.684205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.684339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.684377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.684537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.684572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.684702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.684736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.684844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.684878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.684998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.685036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.685181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.685219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.685336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.685375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.685568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.685628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.685783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.685837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.686008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.686066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.686203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.686236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.686374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.686408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.686543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.686580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.686806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.686841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.686956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.686991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.687150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.687189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.871 [2024-11-18 18:44:40.687341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.871 [2024-11-18 18:44:40.687375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.871 qpair failed and we were unable to recover it.
00:37:42.872 [2024-11-18 18:44:40.687529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.687564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.687700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.687735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.687915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.687954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.688093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.688127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.688302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.688337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 
00:37:42.872 [2024-11-18 18:44:40.688467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.688500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.688678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.688713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.688846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.688881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.689038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.689072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.689219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.689257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 
00:37:42.872 [2024-11-18 18:44:40.689412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.689456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.689639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.689688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.689873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.689910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.690050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.690086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.690267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.690306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 
00:37:42.872 [2024-11-18 18:44:40.690451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.690489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.690662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.690696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.690833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.690869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.691009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.691044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.691201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.691235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 
00:37:42.872 [2024-11-18 18:44:40.691369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.691403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.691508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.691542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.691688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.691728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.691865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.691900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.692012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.692047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 
00:37:42.872 [2024-11-18 18:44:40.692154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.692189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.692353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.692388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.692489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.692523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.692664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.692699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.692814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.692850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 
00:37:42.872 [2024-11-18 18:44:40.692993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.693026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.693171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.693224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.693359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.693394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.693545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.693580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.693776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.693827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 
00:37:42.872 [2024-11-18 18:44:40.693981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.694021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.694172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.694224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.694412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.694463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.694624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.694659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.694804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.694838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 
00:37:42.872 [2024-11-18 18:44:40.695005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.695038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.695158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.695196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.695371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.695414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.695570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.695605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.695745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.695778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 
00:37:42.872 [2024-11-18 18:44:40.695907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.695967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.696102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.696135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.696262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.696300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.872 [2024-11-18 18:44:40.696408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.872 [2024-11-18 18:44:40.696442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.872 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.696579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.696624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 
00:37:42.873 [2024-11-18 18:44:40.696779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.696837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.697003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.697040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.697180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.697214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.697338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.697384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.697495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.697529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 
00:37:42.873 [2024-11-18 18:44:40.697661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.697696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.697805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.697840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.697966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.697999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.698159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.698193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.698323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.698357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 
00:37:42.873 [2024-11-18 18:44:40.698494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.698529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.698640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.698674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.698801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.698834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.698970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.699003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.699180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.699216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 
00:37:42.873 [2024-11-18 18:44:40.699393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.699441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.699621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.699659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.699775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.699810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.699928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.699966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.700169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.700204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 
00:37:42.873 [2024-11-18 18:44:40.700360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.700395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.700529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.700574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.700759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.700793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.700890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.700924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.701063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.701098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 
00:37:42.873 [2024-11-18 18:44:40.701257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.701295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.701470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.701507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.701647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.701682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.701815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.701850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.702067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.702128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 
00:37:42.873 [2024-11-18 18:44:40.702237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.702272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.702384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.702427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.702531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.702571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.702683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.702728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 00:37:42.873 [2024-11-18 18:44:40.702846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.873 [2024-11-18 18:44:40.702880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.873 qpair failed and we were unable to recover it. 
00:37:42.873 [2024-11-18 18:44:40.703020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.873 [2024-11-18 18:44:40.703055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.873 qpair failed and we were unable to recover it.
00:37:42.873 [2024-11-18 18:44:40.703192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.873 [2024-11-18 18:44:40.703225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.873 qpair failed and we were unable to recover it.
00:37:42.873 [2024-11-18 18:44:40.703388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.873 [2024-11-18 18:44:40.703421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.873 qpair failed and we were unable to recover it.
00:37:42.873 [2024-11-18 18:44:40.703586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.873 [2024-11-18 18:44:40.703627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.873 qpair failed and we were unable to recover it.
00:37:42.873 [2024-11-18 18:44:40.703790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.873 [2024-11-18 18:44:40.703825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.873 qpair failed and we were unable to recover it.
00:37:42.873 [2024-11-18 18:44:40.703962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.873 [2024-11-18 18:44:40.703997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.873 qpair failed and we were unable to recover it.
00:37:42.873 [2024-11-18 18:44:40.704163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.873 [2024-11-18 18:44:40.704198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.873 qpair failed and we were unable to recover it.
00:37:42.873 [2024-11-18 18:44:40.704313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.873 [2024-11-18 18:44:40.704347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.873 qpair failed and we were unable to recover it.
00:37:42.873 [2024-11-18 18:44:40.704482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.873 [2024-11-18 18:44:40.704515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.873 qpair failed and we were unable to recover it.
00:37:42.873 [2024-11-18 18:44:40.704637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.873 [2024-11-18 18:44:40.704671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.873 qpair failed and we were unable to recover it.
00:37:42.873 [2024-11-18 18:44:40.704852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.873 [2024-11-18 18:44:40.704901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.873 qpair failed and we were unable to recover it.
00:37:42.873 [2024-11-18 18:44:40.705054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.873 [2024-11-18 18:44:40.705090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.873 qpair failed and we were unable to recover it.
00:37:42.873 [2024-11-18 18:44:40.705232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.873 [2024-11-18 18:44:40.705266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.873 qpair failed and we were unable to recover it.
00:37:42.873 [2024-11-18 18:44:40.705414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.873 [2024-11-18 18:44:40.705449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.873 qpair failed and we were unable to recover it.
00:37:42.873 [2024-11-18 18:44:40.705552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.873 [2024-11-18 18:44:40.705592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.873 qpair failed and we were unable to recover it.
00:37:42.873 [2024-11-18 18:44:40.705760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.873 [2024-11-18 18:44:40.705808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.873 qpair failed and we were unable to recover it.
00:37:42.873 [2024-11-18 18:44:40.705949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.706007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.706192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.706226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.706364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.706398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.706535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.706569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.706710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.706745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.706885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.706920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.707092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.707126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.707317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.707355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.707477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.707514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.707642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.707677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.707814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.707848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.707954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.707988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.708161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.708194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.708367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.708405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.708587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.708634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.708772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.708824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.708975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.709012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.709225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.709285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.709429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.709483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.709621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.709656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.709821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.709856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.710029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.710089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.710215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.710263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.710517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.710582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.710778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.710813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.710967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.711004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.711139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.711192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.711362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.711400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.711558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.711596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.711759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.711793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.711994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.712048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.712209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.712244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.712363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.712398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.712534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.712568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.712708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.712743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.712908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.712942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.713049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.713084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.713233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.713267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.713455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.713493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.713639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.713703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.713841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.713877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.713987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.714022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.714198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.714232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.714342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.714376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.714498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.714536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.714702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.714738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.714872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.714925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.715059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.715093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.715225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.715263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.715386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.715423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.715584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.715625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.715724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.715758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.715852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.874 [2024-11-18 18:44:40.715903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.874 qpair failed and we were unable to recover it.
00:37:42.874 [2024-11-18 18:44:40.716107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.716145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.716318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.716356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.716509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.716547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.716708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.716757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.716911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.716950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.717096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.717134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.717250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.717301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.717418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.717456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.717605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.717685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.717825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.717859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.717991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.718042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.718222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.718260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.718416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.718467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.718652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.718701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.718851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.718888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.718997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.719031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.719171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.719206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.719340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.719374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.719509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.719548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.719722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.719757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.719871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.719905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.720050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.720085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.720225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.720260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.720398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.720434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.720545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.720580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.720726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.875 [2024-11-18 18:44:40.720761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.875 qpair failed and we were unable to recover it.
00:37:42.875 [2024-11-18 18:44:40.720878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.720911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 00:37:42.875 [2024-11-18 18:44:40.721018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.721053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 00:37:42.875 [2024-11-18 18:44:40.721212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.721245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 00:37:42.875 [2024-11-18 18:44:40.721348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.721382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 00:37:42.875 [2024-11-18 18:44:40.721513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.721546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 
00:37:42.875 [2024-11-18 18:44:40.721671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.721710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 00:37:42.875 [2024-11-18 18:44:40.721872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.721906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 00:37:42.875 [2024-11-18 18:44:40.722044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.722086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 00:37:42.875 [2024-11-18 18:44:40.722227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.722262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 00:37:42.875 [2024-11-18 18:44:40.722398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.722432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 
00:37:42.875 [2024-11-18 18:44:40.722577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.722618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 00:37:42.875 [2024-11-18 18:44:40.722761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.722795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 00:37:42.875 [2024-11-18 18:44:40.722965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.722998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 00:37:42.875 [2024-11-18 18:44:40.723100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.723133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 00:37:42.875 [2024-11-18 18:44:40.723263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.723297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 
00:37:42.875 [2024-11-18 18:44:40.723425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.723458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 00:37:42.875 [2024-11-18 18:44:40.723572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.723613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 00:37:42.875 [2024-11-18 18:44:40.723738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.723772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 00:37:42.875 [2024-11-18 18:44:40.723883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.723917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 00:37:42.875 [2024-11-18 18:44:40.724052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.724086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 
00:37:42.875 [2024-11-18 18:44:40.724234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.724269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 00:37:42.875 [2024-11-18 18:44:40.724405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.724451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 00:37:42.875 [2024-11-18 18:44:40.724565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.875 [2024-11-18 18:44:40.724605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.875 qpair failed and we were unable to recover it. 00:37:42.875 [2024-11-18 18:44:40.724749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.724782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.724939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.724972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 
00:37:42.876 [2024-11-18 18:44:40.725101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.725142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.725299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.725333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.725432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.725465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.725629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.725664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.725779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.725832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 
00:37:42.876 [2024-11-18 18:44:40.725965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.725999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.726123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.726157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.726299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.726333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.726471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.726505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.726641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.726676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 
00:37:42.876 [2024-11-18 18:44:40.726836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.726870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.727043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.727077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.727211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.727246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.727388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.727421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.727527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.727560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 
00:37:42.876 [2024-11-18 18:44:40.727705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.727740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.727878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.727913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.728027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.728062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.728174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.728208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.728350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.728398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 
00:37:42.876 [2024-11-18 18:44:40.728567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.728603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.728752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.728787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.728892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.728926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.729037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.729072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.729230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.729268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 
00:37:42.876 [2024-11-18 18:44:40.729394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.729428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.729565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.729599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.729756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.729790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.729915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.729968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.730132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.730166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 
00:37:42.876 [2024-11-18 18:44:40.730273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.730308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.730452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.730487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.730627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.730663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.730802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.730835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.730982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.731034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 
00:37:42.876 [2024-11-18 18:44:40.731211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.731246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.731348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.731382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.731481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.731520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.731648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.731683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.731824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.731858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 
00:37:42.876 [2024-11-18 18:44:40.731993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.732028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.732161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.732194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.732332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.732366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.732503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.732539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.732651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.732685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 
00:37:42.876 [2024-11-18 18:44:40.732817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.732851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.732955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.732990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.733100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.733135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.733263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.733297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.733427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.733461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 
00:37:42.876 [2024-11-18 18:44:40.733594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.733636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.733750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.733784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.733944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.733992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.734162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.734211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.876 qpair failed and we were unable to recover it. 00:37:42.876 [2024-11-18 18:44:40.734329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.876 [2024-11-18 18:44:40.734364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 
00:37:42.877 [2024-11-18 18:44:40.734500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.734534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.734640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.734675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.734809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.734844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.735033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.735086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.735237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.735276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 
00:37:42.877 [2024-11-18 18:44:40.735401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.735435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.735540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.735574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.735738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.735772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.735919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.735970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.736182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.736217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 
00:37:42.877 [2024-11-18 18:44:40.736335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.736374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.736509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.736543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.736669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.736703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.736838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.736871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.737004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.737039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 
00:37:42.877 [2024-11-18 18:44:40.737171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.737204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.737329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.737363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.737494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.737528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.737693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.737727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.737893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.737927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 
00:37:42.877 [2024-11-18 18:44:40.738073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.738126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.738285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.738320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.738472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.738511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.738693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.738758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.738928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.738982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 
00:37:42.877 [2024-11-18 18:44:40.739102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.739142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.739313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.739366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.739548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.739582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.739693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.739728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.739832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.739867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 
00:37:42.877 [2024-11-18 18:44:40.740017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.740071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.740258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.740311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.740473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.740508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.740687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.740741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.740866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.740904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 
00:37:42.877 [2024-11-18 18:44:40.741101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.741155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.741342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.741394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.741532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.741565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.741677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.741711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.741816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.741850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 
00:37:42.877 [2024-11-18 18:44:40.742014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.742066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.742217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.742270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.742408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.742442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.742631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.742683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.742876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.742929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 
00:37:42.877 [2024-11-18 18:44:40.743084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.743136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.743274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.743308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.743420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.743453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.743624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.743658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.743850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.743902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 
00:37:42.877 [2024-11-18 18:44:40.743999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.744033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.744148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.744182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.744317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.744351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.744512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.744546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.877 [2024-11-18 18:44:40.744685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.744720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 
00:37:42.877 [2024-11-18 18:44:40.744879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.877 [2024-11-18 18:44:40.744913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.877 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.745050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.745103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.745319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.745387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.745520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.745559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.745785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.745824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 
00:37:42.878 [2024-11-18 18:44:40.745952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.745991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.746117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.746157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.746328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.746368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.746472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.746506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.746614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.746650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 
00:37:42.878 [2024-11-18 18:44:40.746807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.746859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.746961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.746995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.747135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.747188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.747307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.747344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.747474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.747509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 
00:37:42.878 [2024-11-18 18:44:40.747660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.747699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.747862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.747914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.748110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.748145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.748254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.748289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.748389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.748424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 
00:37:42.878 [2024-11-18 18:44:40.748584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.748640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.748820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.748856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.749067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.749149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.749453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.749520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.749705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.749740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 
00:37:42.878 [2024-11-18 18:44:40.749896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.749934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.750137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.750175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.750321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.750359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.750510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.750549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.750733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.750783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 
00:37:42.878 [2024-11-18 18:44:40.750952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.751006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.751131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.751185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.751375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.751427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.751588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.751628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.751782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.751831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 
00:37:42.878 [2024-11-18 18:44:40.751949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.751986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.752151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.752189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.752367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.752405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.752534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.752571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.752746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.752781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 
00:37:42.878 [2024-11-18 18:44:40.752945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.752982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.753157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.753194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.753339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.753376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.753547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.753596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.753781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.753829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 
00:37:42.878 [2024-11-18 18:44:40.754014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.754071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.754243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.754281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.754487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.754541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.754655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.754689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.754839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.754872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 
00:37:42.878 [2024-11-18 18:44:40.754999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.755034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.755162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.755196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.878 qpair failed and we were unable to recover it. 00:37:42.878 [2024-11-18 18:44:40.755375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.878 [2024-11-18 18:44:40.755412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.879 qpair failed and we were unable to recover it. 00:37:42.879 [2024-11-18 18:44:40.755539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.879 [2024-11-18 18:44:40.755573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.879 qpair failed and we were unable to recover it. 00:37:42.879 [2024-11-18 18:44:40.755725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.879 [2024-11-18 18:44:40.755761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.879 qpair failed and we were unable to recover it. 
00:37:42.879 [2024-11-18 18:44:40.755962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.756007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.756257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.756297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.756439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.756493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.756667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.756702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.756802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.756836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.756968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.757016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.757181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.757216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.757478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.757516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.757677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.757712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.757813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.757847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.758038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.758072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.758208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.758242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.758374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.758408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.758619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.758653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.758767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.758801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.758947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.758995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.759174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.759241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.759411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.759465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.759574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.759616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.759775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.759813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.759973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.760011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.760264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.760320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.760461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.760499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.760642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.760694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.760798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.760832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.761017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.761065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.761192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.761229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.761392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.761429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.761540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.761578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.761764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.761813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.761971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.762020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.762193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.762247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.762395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.762453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.762564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.762600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.762749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.762782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.762922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.762956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.763086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.763119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.763253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.763287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.763414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.763448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.763597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.763657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.763802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.763840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.764017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.764066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.764216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.764252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.764411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.764446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.764581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.764625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.764788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.764822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.764952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.765001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.765148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.765185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.765355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.765391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.765551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.765586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.765711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.765765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.765893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.765941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.766194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.879 [2024-11-18 18:44:40.766255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.879 qpair failed and we were unable to recover it.
00:37:42.879 [2024-11-18 18:44:40.766442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.766509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.766682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.766717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.766829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.766862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.766985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.767020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.767217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.767250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.767387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.767420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.767519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.767558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.767726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.767775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.767897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.767934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.768063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.768098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.768225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.768260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.768408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.768445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.768624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.768696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.768850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.768902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.769028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.769062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.769198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.769231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.769368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.769401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.769603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.769643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.769788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.769822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.769967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.770004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.770181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.770218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.770396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.770433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.770604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.770664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.770796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.770829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.770985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.771033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.771173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.771210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.771367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.771406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.771529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.771567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.771734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.771769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.771939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.771973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.772083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.772117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.772244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.772295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.772421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.772460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.772623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.772678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.772849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.772900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.773030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.773084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.773234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.773271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.773418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.773452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.773656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.773691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.773824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.773858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.773981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.774020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.774186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.880 [2024-11-18 18:44:40.774223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.880 qpair failed and we were unable to recover it.
00:37:42.880 [2024-11-18 18:44:40.774352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.880 [2024-11-18 18:44:40.774404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.880 qpair failed and we were unable to recover it. 00:37:42.880 [2024-11-18 18:44:40.774568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.880 [2024-11-18 18:44:40.774616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.880 qpair failed and we were unable to recover it. 00:37:42.880 [2024-11-18 18:44:40.774801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.880 [2024-11-18 18:44:40.774836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.880 qpair failed and we were unable to recover it. 00:37:42.880 [2024-11-18 18:44:40.774932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.880 [2024-11-18 18:44:40.774965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.880 qpair failed and we were unable to recover it. 00:37:42.880 [2024-11-18 18:44:40.775095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.880 [2024-11-18 18:44:40.775135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.880 qpair failed and we were unable to recover it. 
00:37:42.880 [2024-11-18 18:44:40.775272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.880 [2024-11-18 18:44:40.775313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.880 qpair failed and we were unable to recover it. 00:37:42.880 [2024-11-18 18:44:40.775493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.880 [2024-11-18 18:44:40.775527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.880 qpair failed and we were unable to recover it. 00:37:42.880 [2024-11-18 18:44:40.775657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.880 [2024-11-18 18:44:40.775691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.880 qpair failed and we were unable to recover it. 00:37:42.880 [2024-11-18 18:44:40.775829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.880 [2024-11-18 18:44:40.775862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.880 qpair failed and we were unable to recover it. 00:37:42.880 [2024-11-18 18:44:40.776001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.880 [2024-11-18 18:44:40.776035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.880 qpair failed and we were unable to recover it. 
00:37:42.880 [2024-11-18 18:44:40.776180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.880 [2024-11-18 18:44:40.776215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.880 qpair failed and we were unable to recover it. 00:37:42.880 [2024-11-18 18:44:40.776372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.880 [2024-11-18 18:44:40.776408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.880 qpair failed and we were unable to recover it. 00:37:42.880 [2024-11-18 18:44:40.776513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.880 [2024-11-18 18:44:40.776550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.880 qpair failed and we were unable to recover it. 00:37:42.880 [2024-11-18 18:44:40.776713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.880 [2024-11-18 18:44:40.776747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.880 qpair failed and we were unable to recover it. 00:37:42.880 [2024-11-18 18:44:40.776898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.880 [2024-11-18 18:44:40.776948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.880 qpair failed and we were unable to recover it. 
00:37:42.880 [2024-11-18 18:44:40.777137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.777192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.777378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.777431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.777588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.777631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.777792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.777841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.777983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.778024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 
00:37:42.881 [2024-11-18 18:44:40.778229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.778268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.778418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.778456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.778596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.778637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.778774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.778809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.778937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.778971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 
00:37:42.881 [2024-11-18 18:44:40.779103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.779137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.779266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.779305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.779482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.779522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.779711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.779746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.779896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.779930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 
00:37:42.881 [2024-11-18 18:44:40.780091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.780167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.780309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.780379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.780558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.780591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.780755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.780790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.780976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.781011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 
00:37:42.881 [2024-11-18 18:44:40.781145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.781180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.781318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.781356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.781497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.781534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.781694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.781728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.781828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.781862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 
00:37:42.881 [2024-11-18 18:44:40.781999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.782033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.782220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.782272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.782387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.782425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.782565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.782602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.782739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.782778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 
00:37:42.881 [2024-11-18 18:44:40.782903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.782937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.783083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.783117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.783286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.783323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.783498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.783552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.783723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.783761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 
00:37:42.881 [2024-11-18 18:44:40.783868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.783903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.784047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.784081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.784245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.784283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.784414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.784466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.784590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.784664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 
00:37:42.881 [2024-11-18 18:44:40.784826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.784860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.784962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.784996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.785138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.785172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.785340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.785378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.785598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.785639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 
00:37:42.881 [2024-11-18 18:44:40.785770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.785803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.785923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.785961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.786165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.786202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.786373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.786409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.786554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.786591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 
00:37:42.881 [2024-11-18 18:44:40.786736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.786786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.786963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.787013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.787197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.787250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.787402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.787440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.881 qpair failed and we were unable to recover it. 00:37:42.881 [2024-11-18 18:44:40.787560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.881 [2024-11-18 18:44:40.787595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 
00:37:42.882 [2024-11-18 18:44:40.787748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.787782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.787931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.787966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.788145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.788207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.788351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.788388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.788560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.788597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 
00:37:42.882 [2024-11-18 18:44:40.788793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.788842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.789003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.789063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.789243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.789302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.789433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.789466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.789622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.789657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 
00:37:42.882 [2024-11-18 18:44:40.789792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.789836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.790009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.790044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.790182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.790217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.790377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.790411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.790573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.790617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 
00:37:42.882 [2024-11-18 18:44:40.790775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.790809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.790935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.790969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.791121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.791158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.791273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.791311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.791438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.791476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 
00:37:42.882 [2024-11-18 18:44:40.791641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.791678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.791840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.791876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.792029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.792082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.792260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.792311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.792469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.792502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 
00:37:42.882 [2024-11-18 18:44:40.792633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.792668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.792798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.792833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.792968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.793001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.793141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.793176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.793301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.793335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 
00:37:42.882 [2024-11-18 18:44:40.793502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.793536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.793643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.793679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.793789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.793825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.793958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.793991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.794101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.794136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 
00:37:42.882 [2024-11-18 18:44:40.794276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.794311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.794472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.794505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.794616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.794649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.794775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.794809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.794941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.794978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 
00:37:42.882 [2024-11-18 18:44:40.795168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.795205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.795383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.795421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.795549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.795587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.795732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.795766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.795894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.795927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 
00:37:42.882 [2024-11-18 18:44:40.796038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.796072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.796182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.796215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.796364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.796402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.796551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.796586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.796700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.796734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 
00:37:42.882 [2024-11-18 18:44:40.796868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.796901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.797049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.797083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.797309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.797346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.797494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.797530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.797708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.797747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 
00:37:42.882 [2024-11-18 18:44:40.797917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.797955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.798086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.798136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.798284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.798334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.798501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.798538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.798706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.798740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 
00:37:42.882 [2024-11-18 18:44:40.798901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.798934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.882 [2024-11-18 18:44:40.799065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.882 [2024-11-18 18:44:40.799099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.882 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.799255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.799293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.799448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.799485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.799615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.799649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 
00:37:42.883 [2024-11-18 18:44:40.799781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.799816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.799972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.800009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.800169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.800206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.800445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.800483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.800629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.800679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 
00:37:42.883 [2024-11-18 18:44:40.800783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.800816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.800933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.800966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.801113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.801147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.801321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.801358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.801486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.801524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 
00:37:42.883 [2024-11-18 18:44:40.801682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.801717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.801855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.801888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.802037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.802071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.802207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.802242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.802387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.802421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 
00:37:42.883 [2024-11-18 18:44:40.802619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.802653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.802800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.802834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.802999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.803050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.803225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.803262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.803464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.803501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 
00:37:42.883 [2024-11-18 18:44:40.803687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.803721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.803902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.803939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.804062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.804099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.804296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.804333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.804479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.804516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 
00:37:42.883 [2024-11-18 18:44:40.804699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.804733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.804870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.804922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.805048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.805081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.805240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.805289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.805439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.805481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 
00:37:42.883 [2024-11-18 18:44:40.805604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.805651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.805832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.805866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.805965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.806018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.806198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.806247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.806391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.806426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 
00:37:42.883 [2024-11-18 18:44:40.806632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.806666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.806789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.806826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.806998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.807031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.807127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.807161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.807301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.807334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 
00:37:42.883 [2024-11-18 18:44:40.807478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.807516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.807666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.807700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.807840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.807875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.808037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.808070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.808183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.808217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 
00:37:42.883 [2024-11-18 18:44:40.808318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.808352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.808517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.808555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.808723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.808757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.808892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.808944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.809113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.809163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 
00:37:42.883 [2024-11-18 18:44:40.809315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.809349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.809515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.809549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.809671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.809722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.809902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.809952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 00:37:42.883 [2024-11-18 18:44:40.810079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.883 [2024-11-18 18:44:40.810113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.883 qpair failed and we were unable to recover it. 
00:37:42.883 [2024-11-18 18:44:40.810258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.883 [2024-11-18 18:44:40.810292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.883 qpair failed and we were unable to recover it.
00:37:42.883 [2024-11-18 18:44:40.810405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.883 [2024-11-18 18:44:40.810439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.883 qpair failed and we were unable to recover it.
00:37:42.883 [2024-11-18 18:44:40.810578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.883 [2024-11-18 18:44:40.810618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.883 qpair failed and we were unable to recover it.
00:37:42.883 [2024-11-18 18:44:40.810783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.883 [2024-11-18 18:44:40.810820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.883 qpair failed and we were unable to recover it.
00:37:42.883 [2024-11-18 18:44:40.811002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.811036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.811216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.811253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.811399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.811436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.811579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.811624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.811757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.811791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.811956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.812006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.812159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.812196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.812386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.812424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.812578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.812624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.812797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.812835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.812949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.812990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.813170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.813207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.813365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.813399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.813493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.813526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.813736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.813770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.813946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.813996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.814155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.814188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.814351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.814385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.814545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.814582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.814711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.814748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.814909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.814942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.815068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.815118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.815230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.815267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.815377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.815415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.815597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.815638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.815755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.815788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.815946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.815980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.816169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.816206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.816365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.816397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.816536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.816570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.816741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.816778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.816912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.816950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.817106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.817141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.817254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.817294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.817452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.817486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.817652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.817688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.817787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.817821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.817958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.817992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.818121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.818158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.818298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.818336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.818484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.818517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.818625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.818676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.818840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.818873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.819037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.819088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.819271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.819305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.819457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.819494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.819681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.819715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.819824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.819858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.819968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.820001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.820137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.820171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.820325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.820367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.820521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.820558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.820721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.820755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.820890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.820924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.821060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.821097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.821234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.821272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.821453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.821486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.821659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.884 [2024-11-18 18:44:40.821697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.884 qpair failed and we were unable to recover it.
00:37:42.884 [2024-11-18 18:44:40.821805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.821842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.821996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.822033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.822217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.822251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.822403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.822441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.822559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.822596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.822740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.822777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.822908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.822941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.823077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.823111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.823279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.823313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.823448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.823483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.823622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.823658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.823795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.823830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.824027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.824061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.824165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.824199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.824300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.824333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.824467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.824502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.824629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.824667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.824791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.824829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.824988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.825022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.825150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set
00:37:42.885 [2024-11-18 18:44:40.825402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.825457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.825618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.825675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.825787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.825823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.825956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.826011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.826172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.826212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.826343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.826380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.826560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.826598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.826780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.826818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.827006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.827040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.827233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.827293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.827503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.827542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.827707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.827741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.827847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.827881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.828031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.885 [2024-11-18 18:44:40.828069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.885 qpair failed and we were unable to recover it.
00:37:42.885 [2024-11-18 18:44:40.828226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.828259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.828397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.828430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.828538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.828572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.828734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.828769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.828873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.828906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 
00:37:42.885 [2024-11-18 18:44:40.829066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.829100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.829255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.829290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.829420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.829454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.829586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.829651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.829792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.829825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 
00:37:42.885 [2024-11-18 18:44:40.829926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.829959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.830097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.830130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.830238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.830276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.830397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.830434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.830567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.830617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 
00:37:42.885 [2024-11-18 18:44:40.830871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.830906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.831101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.831140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.831278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.831317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.831557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.831597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.831732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.831770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 
00:37:42.885 [2024-11-18 18:44:40.831934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.831968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.832074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.832107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.832210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.832244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.832406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.832446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.832585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.832645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 
00:37:42.885 [2024-11-18 18:44:40.832790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.832824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.832984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.833038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.833206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.833240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.833396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.833434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 00:37:42.885 [2024-11-18 18:44:40.833546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.885 [2024-11-18 18:44:40.833584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.885 qpair failed and we were unable to recover it. 
00:37:42.885 [2024-11-18 18:44:40.833754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.833788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.833895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.833929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.834130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.834181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.834336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.834370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.834473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.834528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 
00:37:42.886 [2024-11-18 18:44:40.834644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.834683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.834841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.834877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.835055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.835093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.835216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.835255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.835421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.835457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 
00:37:42.886 [2024-11-18 18:44:40.835562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.835596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.835765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.835802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.835967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.836001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.836193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.836263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.836387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.836424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 
00:37:42.886 [2024-11-18 18:44:40.836547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.836581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.836717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.836752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.836881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.836918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.837053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.837087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.837212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.837246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 
00:37:42.886 [2024-11-18 18:44:40.837412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.837446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.837542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.837576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.837750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.837792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.837980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.838021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.838246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.838281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 
00:37:42.886 [2024-11-18 18:44:40.838420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.838460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.838617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.838657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.838840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.838875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.839040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.839096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.839239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.839293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 
00:37:42.886 [2024-11-18 18:44:40.839543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.839593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.839742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.839777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.839922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.839957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.840093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.840129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.840236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.840287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 
00:37:42.886 [2024-11-18 18:44:40.840431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.840471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.840645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.840682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.840811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.840859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.841053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.841115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.841272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.841307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 
00:37:42.886 [2024-11-18 18:44:40.841466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.841500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.841605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.841645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.841774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.841808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.841949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.841983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.842091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.842125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 
00:37:42.886 [2024-11-18 18:44:40.842229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.842263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.842403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.842439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.842591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.842664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.842815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.842852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.842968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.843019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 
00:37:42.886 [2024-11-18 18:44:40.843188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.843225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.843382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.843416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.843552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.843605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.843749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.843785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 00:37:42.886 [2024-11-18 18:44:40.843929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.843964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it. 
00:37:42.886 [2024-11-18 18:44:40.844072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.886 [2024-11-18 18:44:40.844125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.886 qpair failed and we were unable to recover it.
[... the three messages above (connect() failed, errno = 111 / ECONNREFUSED; sock connection error; qpair failed and we were unable to recover it) repeat continuously from 18:44:40.844 through 18:44:40.865 for tqpairs 0x61500021ff00, 0x6150001ffe80, and 0x6150001f2f00, all targeting addr=10.0.0.2, port=4420 ...]
00:37:42.888 [2024-11-18 18:44:40.865455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.888 [2024-11-18 18:44:40.865493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.888 qpair failed and we were unable to recover it. 00:37:42.888 [2024-11-18 18:44:40.865662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.888 [2024-11-18 18:44:40.865697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.888 qpair failed and we were unable to recover it. 00:37:42.888 [2024-11-18 18:44:40.865866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.888 [2024-11-18 18:44:40.865901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.888 qpair failed and we were unable to recover it. 00:37:42.888 [2024-11-18 18:44:40.866022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.888 [2024-11-18 18:44:40.866055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.888 qpair failed and we were unable to recover it. 00:37:42.888 [2024-11-18 18:44:40.866160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.888 [2024-11-18 18:44:40.866194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.888 qpair failed and we were unable to recover it. 
00:37:42.888 [2024-11-18 18:44:40.866300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.888 [2024-11-18 18:44:40.866333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.888 qpair failed and we were unable to recover it. 00:37:42.888 [2024-11-18 18:44:40.866473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.888 [2024-11-18 18:44:40.866507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.888 qpair failed and we were unable to recover it. 00:37:42.888 [2024-11-18 18:44:40.866683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.888 [2024-11-18 18:44:40.866717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.888 qpair failed and we were unable to recover it. 00:37:42.888 [2024-11-18 18:44:40.866823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.888 [2024-11-18 18:44:40.866874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.888 qpair failed and we were unable to recover it. 00:37:42.888 [2024-11-18 18:44:40.867023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.888 [2024-11-18 18:44:40.867060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.888 qpair failed and we were unable to recover it. 
00:37:42.888 [2024-11-18 18:44:40.867192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.888 [2024-11-18 18:44:40.867227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.888 qpair failed and we were unable to recover it. 00:37:42.888 [2024-11-18 18:44:40.867341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.888 [2024-11-18 18:44:40.867375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.888 qpair failed and we were unable to recover it. 00:37:42.888 [2024-11-18 18:44:40.867502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.888 [2024-11-18 18:44:40.867544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.888 qpair failed and we were unable to recover it. 00:37:42.888 [2024-11-18 18:44:40.867717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.888 [2024-11-18 18:44:40.867752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.888 qpair failed and we were unable to recover it. 00:37:42.888 [2024-11-18 18:44:40.867852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.888 [2024-11-18 18:44:40.867886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.888 qpair failed and we were unable to recover it. 
00:37:42.888 [2024-11-18 18:44:40.868053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.888 [2024-11-18 18:44:40.868089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.888 qpair failed and we were unable to recover it. 00:37:42.888 [2024-11-18 18:44:40.868215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.888 [2024-11-18 18:44:40.868250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.888 qpair failed and we were unable to recover it. 00:37:42.888 [2024-11-18 18:44:40.868366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.888 [2024-11-18 18:44:40.868400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.888 qpair failed and we were unable to recover it. 00:37:42.888 [2024-11-18 18:44:40.868533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.888 [2024-11-18 18:44:40.868572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.888 qpair failed and we were unable to recover it. 00:37:42.888 [2024-11-18 18:44:40.868727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.888 [2024-11-18 18:44:40.868762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.888 qpair failed and we were unable to recover it. 
00:37:42.889 [2024-11-18 18:44:40.868875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.868908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.869038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.869072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.869172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.869206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.869369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.869423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.869560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.869599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 
00:37:42.889 [2024-11-18 18:44:40.869761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.869795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.869975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.870012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.870194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.870227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.870332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.870366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.870477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.870512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 
00:37:42.889 [2024-11-18 18:44:40.870705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.870739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.870877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.870910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.871011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.871050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.871183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.871216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.871359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.871392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 
00:37:42.889 [2024-11-18 18:44:40.871540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.871591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.871744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.871782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.871916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.871950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.872056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.872090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.872260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.872315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 
00:37:42.889 [2024-11-18 18:44:40.872463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.872500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.872655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.872692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.872845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.872904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.873042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.873077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.873225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.873280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 
00:37:42.889 [2024-11-18 18:44:40.873446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.873481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.873588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.873629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.873791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.873825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.873928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.873963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.874093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.874127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 
00:37:42.889 [2024-11-18 18:44:40.874260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.874293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.874406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.874440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.874546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.874598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.874704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.874737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.874896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.874933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 
00:37:42.889 [2024-11-18 18:44:40.875072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.875105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.875219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.875253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.875371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.875410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.875578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.875624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.875792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.875828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 
00:37:42.889 [2024-11-18 18:44:40.875956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.875994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.876155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.876190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.876375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.876414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.876539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.876577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.876717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.876750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 
00:37:42.889 [2024-11-18 18:44:40.876861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.876895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.877031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.877065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.877196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.877230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.877356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.877389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.877525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.877579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 
00:37:42.889 [2024-11-18 18:44:40.877734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.877782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.877896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.877939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.878073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.878108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.878243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.878296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.878400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.878434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 
00:37:42.889 [2024-11-18 18:44:40.878559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.878592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.878722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.878757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.878881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.878929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.879064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.879102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.879327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.879365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 
00:37:42.889 [2024-11-18 18:44:40.879479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.879514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.879685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.879722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.879876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.879911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.880102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.880156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.889 [2024-11-18 18:44:40.880286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.880340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 
00:37:42.889 [2024-11-18 18:44:40.880500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.889 [2024-11-18 18:44:40.880535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.889 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.880650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.880685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.880788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.880824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.880951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.881002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.881187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.881238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 
00:37:42.890 [2024-11-18 18:44:40.881349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.881383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.881544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.881579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.881722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.881763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.881897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.881933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.882090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.882126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 
00:37:42.890 [2024-11-18 18:44:40.882235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.882270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.882407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.882440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.882554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.882588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.882732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.882767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.882872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.882906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 
00:37:42.890 [2024-11-18 18:44:40.883030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.883081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.883191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.883225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.883325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.883359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.883539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.883587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.883719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.883770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 
00:37:42.890 [2024-11-18 18:44:40.883880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.883916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.884074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.884109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.884244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.884279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.884418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.884466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.884575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.884616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 
00:37:42.890 [2024-11-18 18:44:40.884728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.884762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.884897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.884949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.885099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.885151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.885351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.885404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.885540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.885592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 
00:37:42.890 [2024-11-18 18:44:40.885755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.885804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.886068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.886130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.886255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.886295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.886450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.886488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.886652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.886688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 
00:37:42.890 [2024-11-18 18:44:40.886821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.886872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.886977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.887010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.887159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.887212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.887373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.887406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.887518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.887555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 
00:37:42.890 [2024-11-18 18:44:40.887682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.887719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.887858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.887891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.888019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.888057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.888272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.888338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.890 [2024-11-18 18:44:40.888521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.888560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 
00:37:42.890 [2024-11-18 18:44:40.888724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.890 [2024-11-18 18:44:40.888759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.890 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.888872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.888929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.889049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.889092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.889267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.889305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.889453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.889507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 
00:37:42.891 [2024-11-18 18:44:40.889663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.889716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.889827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.889861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.889997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.890052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.890227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.890265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.890422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.890476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 
00:37:42.891 [2024-11-18 18:44:40.890598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.890643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.890770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.890819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.890951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.891001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.891167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.891221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.891404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.891461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 
00:37:42.891 [2024-11-18 18:44:40.891597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.891637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.891765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.891802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.891929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.891997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.892175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.892241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.892439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.892478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 
00:37:42.891 [2024-11-18 18:44:40.892626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.892679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.892820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.892868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.893023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.893078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.893187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.893225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.893394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.893454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 
00:37:42.891 [2024-11-18 18:44:40.893576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.893615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.893737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.893771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.893912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.893947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.894077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.894115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.894260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.894300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 
00:37:42.891 [2024-11-18 18:44:40.894425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.894464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.894614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.894669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.894820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.894858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.895002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.895037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.895144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.895204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 
00:37:42.891 [2024-11-18 18:44:40.895352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.895390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.895553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.895587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.895693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.895727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.895859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.895892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.896062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.896101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 
00:37:42.891 [2024-11-18 18:44:40.896225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.896262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.896376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.896413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.896548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.896590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.896717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.896755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.896868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.896938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 
00:37:42.891 [2024-11-18 18:44:40.897058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.897110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.897287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.897326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.897479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.897517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.897683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.897718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 00:37:42.891 [2024-11-18 18:44:40.897823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.891 [2024-11-18 18:44:40.897857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.891 qpair failed and we were unable to recover it. 
00:37:42.891 [2024-11-18 18:44:40.897995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.898029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.898186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.898223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.898385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.898423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.898589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.898667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.898790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.898827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.899016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.899051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.899248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.899285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.899438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.899476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.899623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.899657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.899788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.899822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.899924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.899957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.900085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.900123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.900296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.900333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.900470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.900508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.900706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.900755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.900889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.900925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.901052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.901106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.901273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.901338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.901512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.901550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.901712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.901761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.901873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.901929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.902188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.902279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.902424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.902462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.902587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.902632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.902767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.902801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.902917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.902951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.903093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.903127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.903306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.903373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.903489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.903526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.903700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.903749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.903910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.903959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.904135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.904190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.904376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.904434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.904545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.904581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.904710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.904760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.904882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.904918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.905033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.905067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.905224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.905262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.905407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.905445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.905595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.905642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.905761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.905794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.905958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.906019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.906187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.906227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.906399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.906438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.906587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.906632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.906785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.906834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.907014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.907082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.907224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.907265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.907402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.907438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.907568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.907602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.907720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.907753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.907881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.907919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.892 [2024-11-18 18:44:40.908030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.892 [2024-11-18 18:44:40.908068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.892 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.908194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.908235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.908410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.908449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.908598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.908661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.908833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.908870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.909095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.909135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.909258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.909297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.909414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.909454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.909639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.909688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.909824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.909873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.910011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.910051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.910200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.910238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.910412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.910472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.910635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.910689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.910816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.910864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.911040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.911106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.911291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.911352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.911522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.911558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.911697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.911743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.911870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.911924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.912070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.912122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.912237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.912272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.912408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.912442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.912572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.912613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.912728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.912761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.912900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.912948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.913102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.913138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.913237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.913271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.913376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.913410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.913563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.913624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.913786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.913834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.914034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.914074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.914228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.914289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.914421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.914485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.914663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.914699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.914820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.914874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.915029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.915081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.915234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.915285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.915400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.915435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.915574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.915615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.915726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.915762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.915880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.915914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.916038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.893 [2024-11-18 18:44:40.916072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.893 qpair failed and we were unable to recover it.
00:37:42.893 [2024-11-18 18:44:40.916178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.893 [2024-11-18 18:44:40.916213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.893 qpair failed and we were unable to recover it. 00:37:42.893 [2024-11-18 18:44:40.916318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.893 [2024-11-18 18:44:40.916352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.893 qpair failed and we were unable to recover it. 00:37:42.893 [2024-11-18 18:44:40.916497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.893 [2024-11-18 18:44:40.916530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.893 qpair failed and we were unable to recover it. 00:37:42.893 [2024-11-18 18:44:40.916713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.893 [2024-11-18 18:44:40.916749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.893 qpair failed and we were unable to recover it. 00:37:42.893 [2024-11-18 18:44:40.916854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.893 [2024-11-18 18:44:40.916893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.893 qpair failed and we were unable to recover it. 
00:37:42.893 [2024-11-18 18:44:40.917023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.893 [2024-11-18 18:44:40.917057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.893 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.917171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.917204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.917333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.917382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.917507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.917556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.917702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.917738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 
00:37:42.894 [2024-11-18 18:44:40.917961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.918021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.918188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.918253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.918407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.918444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.918569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.918604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.918766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.918814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 
00:37:42.894 [2024-11-18 18:44:40.919012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.919082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.919227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.919315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.919484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.919522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.919683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.919722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.919835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.919870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 
00:37:42.894 [2024-11-18 18:44:40.919994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.920030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.920192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.920230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.920349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.920400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.920528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.920566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.920761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.920810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 
00:37:42.894 [2024-11-18 18:44:40.920927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.920964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.921072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.921107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.921215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.921250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.921420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.921488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.921645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.921695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 
00:37:42.894 [2024-11-18 18:44:40.921816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.921853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.922076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.922135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.922287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.922345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.922491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.922528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.922658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.922713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 
00:37:42.894 [2024-11-18 18:44:40.922864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.922913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.923072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.923127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.923292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.923344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.923445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.923480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.923613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.923649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 
00:37:42.894 [2024-11-18 18:44:40.923773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.923809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.923945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.923980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.924096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.924130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.924237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.924270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.924419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.924471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 
00:37:42.894 [2024-11-18 18:44:40.924605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.924661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.924786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.924821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.924949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.925002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.925162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.925220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.925385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.925445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 
00:37:42.894 [2024-11-18 18:44:40.925599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.925642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.925767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.925807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.925984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.926033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.926155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.926212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.926367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.926403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 
00:37:42.894 [2024-11-18 18:44:40.926534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.926570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.926717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.926753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.926881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.926919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.894 qpair failed and we were unable to recover it. 00:37:42.894 [2024-11-18 18:44:40.927101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.894 [2024-11-18 18:44:40.927156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.927308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.927360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 
00:37:42.895 [2024-11-18 18:44:40.927502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.927537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.927683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.927721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.927891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.927944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.928080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.928137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.928275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.928309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 
00:37:42.895 [2024-11-18 18:44:40.928413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.928445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.928583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.928627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.928772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.928825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.928935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.928969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.929080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.929115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 
00:37:42.895 [2024-11-18 18:44:40.929288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.929323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.929429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.929462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.929597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.929640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.929751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.929785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.929925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.929959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 
00:37:42.895 [2024-11-18 18:44:40.930064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.930098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.930203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.930237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.930346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.930381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.930523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.930556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.930684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.930733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 
00:37:42.895 [2024-11-18 18:44:40.930853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.930908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.931042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.931102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.931245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.931280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.931393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.931427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.931529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.931567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 
00:37:42.895 [2024-11-18 18:44:40.931677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.931712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.931830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.931869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.932013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.932048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.932154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.932189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.932325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.932358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 
00:37:42.895 [2024-11-18 18:44:40.932504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.932540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.932701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.932740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.932890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.932929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.933136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.933189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 00:37:42.895 [2024-11-18 18:44:40.933308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.895 [2024-11-18 18:44:40.933345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.895 qpair failed and we were unable to recover it. 
00:37:42.895 [2024-11-18 18:44:40.933499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.895 [2024-11-18 18:44:40.933531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.895 qpair failed and we were unable to recover it.
00:37:42.895 [2024-11-18 18:44:40.933692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.895 [2024-11-18 18:44:40.933732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.895 qpair failed and we were unable to recover it.
00:37:42.895 [2024-11-18 18:44:40.933857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.895 [2024-11-18 18:44:40.933896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.895 qpair failed and we were unable to recover it.
00:37:42.895 [2024-11-18 18:44:40.934062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.895 [2024-11-18 18:44:40.934126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.895 qpair failed and we were unable to recover it.
00:37:42.895 [2024-11-18 18:44:40.934291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.895 [2024-11-18 18:44:40.934355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.895 qpair failed and we were unable to recover it.
00:37:42.895 [2024-11-18 18:44:40.934500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.895 [2024-11-18 18:44:40.934538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.895 qpair failed and we were unable to recover it.
00:37:42.895 [2024-11-18 18:44:40.934711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.895 [2024-11-18 18:44:40.934745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.895 qpair failed and we were unable to recover it.
00:37:42.895 [2024-11-18 18:44:40.934870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.895 [2024-11-18 18:44:40.934908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.895 qpair failed and we were unable to recover it.
00:37:42.895 [2024-11-18 18:44:40.935024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.895 [2024-11-18 18:44:40.935061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.895 qpair failed and we were unable to recover it.
00:37:42.895 [2024-11-18 18:44:40.935211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.895 [2024-11-18 18:44:40.935249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.895 qpair failed and we were unable to recover it.
00:37:42.895 [2024-11-18 18:44:40.935404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.895 [2024-11-18 18:44:40.935442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.895 qpair failed and we were unable to recover it.
00:37:42.895 [2024-11-18 18:44:40.935584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.895 [2024-11-18 18:44:40.935629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.895 qpair failed and we were unable to recover it.
00:37:42.895 [2024-11-18 18:44:40.935751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.895 [2024-11-18 18:44:40.935784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.895 qpair failed and we were unable to recover it.
00:37:42.895 [2024-11-18 18:44:40.935907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.895 [2024-11-18 18:44:40.935948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.895 qpair failed and we were unable to recover it.
00:37:42.895 [2024-11-18 18:44:40.936111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.895 [2024-11-18 18:44:40.936163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.895 qpair failed and we were unable to recover it.
00:37:42.895 [2024-11-18 18:44:40.936314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.936366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.936482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.936516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.936651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.936685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.936820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.936854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.936997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.937032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.937141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.937175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.937310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.937343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.937452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.937485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.937623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.937672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.937791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.937829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.937992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.938057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.938277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.938336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.938470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.938505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.938623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.938657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.938776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.938820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.938992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.939029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.939175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.939213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.939342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.939380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.939529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.939566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.939716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.939754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.939890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.939946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.940099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.940152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.940311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.940372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.940537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.940571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.940703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.940757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.940907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.940942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.941074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.941108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.941241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.941274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.941394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.941428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.941539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.941573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.941731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.941780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.941895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.941931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.942089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.942142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.942322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.942373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.942480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.942514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.942682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.942725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.942846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.942884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.942993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.943030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.943169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.943207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.943357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.943417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.943568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.943604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.943766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.943801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.943930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.943970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.944220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.944260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.944380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.944428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.944622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.944658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.944794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.944829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.944935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.944986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.945143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.945196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.945341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.945379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.945533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.896 [2024-11-18 18:44:40.945573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.896 qpair failed and we were unable to recover it.
00:37:42.896 [2024-11-18 18:44:40.945745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.945779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.945883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.945935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.946063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.946101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.946228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.946283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.946426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.946476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.946640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.946695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.946829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.946864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.946973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.947025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.947146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.947186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.947372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.947410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.947563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.947623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.947786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.947821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.947945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.947994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.948162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.948215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.948345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.948410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.948578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.948617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.948755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.948788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.948898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.948932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.949071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.949105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.949243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.949277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.949386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.949420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.949545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.949593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.949735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.949771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.949926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.949963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.950091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.950137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.950245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.950280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.950435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.950482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.950632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.950668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.950837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.950882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.951096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.951151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.951344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.951403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.951555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.951590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.951716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.951752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.951868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.951941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.952156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.952195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.952313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.952351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.952512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.952567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.952748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.952798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.953013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.953050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.953164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.953198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.953326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.953389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.953561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.953595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.953767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.953802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.953927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.953970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.954184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.954221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.954368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.954405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.954517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.954554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.954740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.954790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.954912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.954948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.955129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.955182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.955308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.955346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.955486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.955539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.955671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.955706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.955840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.955873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.956045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.956079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.956191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.956225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.897 qpair failed and we were unable to recover it.
00:37:42.897 [2024-11-18 18:44:40.956389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.897 [2024-11-18 18:44:40.956434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.956601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.956640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.956747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.956781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.956902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.956936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.957043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.957076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.957218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.957269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.957392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.957429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.957591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.957635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.957749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.957784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.957919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.957957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.958082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.958134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.958291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.958329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.958447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.958486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.958662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.958712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.958870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.958908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.959061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.959114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.959216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.959251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.959387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.959437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.959585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.959641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.959763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.959798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.959928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.959962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.960105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.960139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.960244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.960278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.960401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.960456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.960612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.960652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.960759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.960794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.960950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.960990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.961143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.961187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.961328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.961368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.961513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.961552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.961706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.961754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.961905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.961941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.962099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.962152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.962253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.962287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.962407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.962441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.962605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.962666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.962799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.962847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.962978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.963014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.963147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.963186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.963300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.963337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.963477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.963514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.963704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.963739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.963874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.963907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.964017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.964050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.964207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.964244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.964378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.964415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.964564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.964601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.964741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.964775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.964913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.964965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.965117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.965155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.965393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.965430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.965583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.965626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.965744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.965778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.965910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.965948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.966107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.966149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.966294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.966347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.966500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.898 [2024-11-18 18:44:40.966549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.898 qpair failed and we were unable to recover it.
00:37:42.898 [2024-11-18 18:44:40.966705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.899 [2024-11-18 18:44:40.966740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.899 qpair failed and we were unable to recover it.
00:37:42.899 [2024-11-18 18:44:40.966875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.899 [2024-11-18 18:44:40.966927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.899 qpair failed and we were unable to recover it.
00:37:42.899 [2024-11-18 18:44:40.967082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.899 [2024-11-18 18:44:40.967120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.899 qpair failed and we were unable to recover it.
00:37:42.899 [2024-11-18 18:44:40.967291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.899 [2024-11-18 18:44:40.967329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.899 qpair failed and we were unable to recover it.
00:37:42.899 [2024-11-18 18:44:40.967443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.899 [2024-11-18 18:44:40.967480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.899 qpair failed and we were unable to recover it.
00:37:42.899 [2024-11-18 18:44:40.967630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.899 [2024-11-18 18:44:40.967683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.899 qpair failed and we were unable to recover it.
00:37:42.899 [2024-11-18 18:44:40.967794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.899 [2024-11-18 18:44:40.967827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.899 qpair failed and we were unable to recover it.
00:37:42.899 [2024-11-18 18:44:40.967942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.899 [2024-11-18 18:44:40.967976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.899 qpair failed and we were unable to recover it.
00:37:42.899 [2024-11-18 18:44:40.968143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.899 [2024-11-18 18:44:40.968180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.899 qpair failed and we were unable to recover it.
00:37:42.899 [2024-11-18 18:44:40.968325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.899 [2024-11-18 18:44:40.968362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.899 qpair failed and we were unable to recover it.
00:37:42.899 [2024-11-18 18:44:40.968504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.899 [2024-11-18 18:44:40.968546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.899 qpair failed and we were unable to recover it.
00:37:42.899 [2024-11-18 18:44:40.968730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.968765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.968903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.968937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.969092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.969130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.969273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.969310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.969471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.969508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 
00:37:42.899 [2024-11-18 18:44:40.969636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.969688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.969844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.969910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.970087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.970123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.970311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.970350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.970476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.970527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 
00:37:42.899 [2024-11-18 18:44:40.970637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.970671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.970777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.970809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.970965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.971003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.971154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.971193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.971351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.971389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 
00:37:42.899 [2024-11-18 18:44:40.971531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.971568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.971711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.971745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.971900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.971937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.972089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.972126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.972251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.972302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 
00:37:42.899 [2024-11-18 18:44:40.972420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.972457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.972604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.972682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.972803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.972839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.972973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.973026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.973200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.973238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 
00:37:42.899 [2024-11-18 18:44:40.973356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.973394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.973552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.973618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.973763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.973797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.973934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.973968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.974128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.974167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 
00:37:42.899 [2024-11-18 18:44:40.974307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.974365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.974500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.974551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.974721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.974770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.974907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.974950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.975078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.975131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 
00:37:42.899 [2024-11-18 18:44:40.975280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.975318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.975442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.975481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.975642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.975691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.975814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.975849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.899 [2024-11-18 18:44:40.975978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.976035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 
00:37:42.899 [2024-11-18 18:44:40.976234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.899 [2024-11-18 18:44:40.976272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.899 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.976401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.976439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.976581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.976636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.976770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.976804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.976923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.976972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 
00:37:42.900 [2024-11-18 18:44:40.977195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.977246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.977451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.977488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.977620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.977656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.977793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.977828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.977976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.978013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 
00:37:42.900 [2024-11-18 18:44:40.978158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.978197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.978361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.978414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.978550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.978585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.978738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.978773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.978899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.978938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 
00:37:42.900 [2024-11-18 18:44:40.979150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.979188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.979367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.979408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.979521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.979559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.979696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.979731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.979842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.979876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 
00:37:42.900 [2024-11-18 18:44:40.980005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.980041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.980165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.980216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.980385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.980444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.980603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.980654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.980786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.980822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 
00:37:42.900 [2024-11-18 18:44:40.980993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.981028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.981145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.981180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.981306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.981342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.981470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.981508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.981670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.981705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 
00:37:42.900 [2024-11-18 18:44:40.981826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.981874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.982047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.982104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.982265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.982319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.982449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.982488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.982682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.982731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 
00:37:42.900 [2024-11-18 18:44:40.982852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.982906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.983084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.983122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.983248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.983305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.983457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.983514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.983705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.983760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 
00:37:42.900 [2024-11-18 18:44:40.983918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.983967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.984144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.984185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.984347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.984399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.984531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.984566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.984750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.984799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 
00:37:42.900 [2024-11-18 18:44:40.984978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.985031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.985289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.985345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.985455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.985493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.985671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.985708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.985831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.985866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 
00:37:42.900 [2024-11-18 18:44:40.986009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.986064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.986198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.986264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.986432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.986501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.986645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.986682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 00:37:42.900 [2024-11-18 18:44:40.986818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.900 [2024-11-18 18:44:40.986856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.900 qpair failed and we were unable to recover it. 
00:37:42.900 [2024-11-18 18:44:40.986983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.900 [2024-11-18 18:44:40.987041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.900 qpair failed and we were unable to recover it.
00:37:42.900 [2024-11-18 18:44:40.987181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.900 [2024-11-18 18:44:40.987240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.900 qpair failed and we were unable to recover it.
00:37:42.900 [2024-11-18 18:44:40.987435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.900 [2024-11-18 18:44:40.987492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.900 qpair failed and we were unable to recover it.
00:37:42.900 [2024-11-18 18:44:40.987616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.900 [2024-11-18 18:44:40.987672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.900 qpair failed and we were unable to recover it.
00:37:42.900 [2024-11-18 18:44:40.987843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.900 [2024-11-18 18:44:40.987882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.900 qpair failed and we were unable to recover it.
00:37:42.900 [2024-11-18 18:44:40.988021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.988060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.988241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.988299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.988430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.988469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.988644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.988698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.988846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.988880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.989029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.989068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.989276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.989335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.989483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.989520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.989682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.989717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.989833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.989870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.990015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.990053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.990178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.990216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.990403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.990474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.990646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.990701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.990840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.990875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.991092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.991130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.991277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.991314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.991422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.991459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.991603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.991649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.991800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.991854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.992011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.992067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.992221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.992273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.992366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.992405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.992540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.992574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.992713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.992763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.992888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.992924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.993030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.993063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.993229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.993263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.993381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.993416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.993524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.993557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.993700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.993736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.993885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.993938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.994105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.994173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.994313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.994355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.994495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.994531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.994644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.994680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.994819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.994871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.995009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.995060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.995204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.995243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.995394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.995432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.995577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.995618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.995757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.995792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.995973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.996012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.996180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.996220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.996404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.996442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.996586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.996650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.996816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.996866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.997027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.997081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.997263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.997322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.997494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.997551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.997695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.997729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.997842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.997895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.998039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.998099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.998390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.998453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.998570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.998613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.998771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.998805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.998907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.998940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.999058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.999111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.999273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.901 [2024-11-18 18:44:40.999332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.901 qpair failed and we were unable to recover it.
00:37:42.901 [2024-11-18 18:44:40.999475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:40.999516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:40.999667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:40.999717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:40.999843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:40.999879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:40.999989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.000023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.000129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.000163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.000303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.000355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.000582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.000628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.000777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.000812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.000944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.000983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.001216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.001256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.001397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.001436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.001623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.001676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.001789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.001822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.001954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.001989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.002169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.002225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.002403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.002442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.002628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.002690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.002796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.002829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.002928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.002968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.003078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.003112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.003224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.003257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.003409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.003448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.003579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.003629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.003807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.003856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.004013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.004049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.004190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.004244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.004356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.004394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.004585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.004664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.004818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.004853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.005030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.005102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.005260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.902 [2024-11-18 18:44:41.005302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:42.902 qpair failed and we were unable to recover it.
00:37:42.902 [2024-11-18 18:44:41.005461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.005500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.005680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.005716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.005828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.005874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.006002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.006040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.006160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.006219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 
00:37:42.902 [2024-11-18 18:44:41.006365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.006404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.006531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.006565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.006726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.006762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.006904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.006958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.007185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.007227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 
00:37:42.902 [2024-11-18 18:44:41.007359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.007398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.007593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.007660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.007831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.007880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.008089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.008148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.008338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.008396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 
00:37:42.902 [2024-11-18 18:44:41.008579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.008630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.008793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.008828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.008936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.008988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.009169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.009229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.009445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.009484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 
00:37:42.902 [2024-11-18 18:44:41.009643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.009698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.009850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.009920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.010117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.010169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.010339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.010378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.010500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.010538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 
00:37:42.902 [2024-11-18 18:44:41.010673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.010707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.010830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.010879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.902 qpair failed and we were unable to recover it. 00:37:42.902 [2024-11-18 18:44:41.011054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-11-18 18:44:41.011108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.011279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.011314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.011460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.011497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 
00:37:42.903 [2024-11-18 18:44:41.011662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.011697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.011804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.011856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.012012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.012051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.012171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.012208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.012354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.012391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 
00:37:42.903 [2024-11-18 18:44:41.012498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.012535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.012685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.012725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.012858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.012891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.013029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.013064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.013227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.013277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 
00:37:42.903 [2024-11-18 18:44:41.013402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.013439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.013552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.013588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.013753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.013787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.013918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.013966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.014159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.014220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 
00:37:42.903 [2024-11-18 18:44:41.014391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.014451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.014563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.014600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.014729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.014789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.014938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.014988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.015099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.015136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 
00:37:42.903 [2024-11-18 18:44:41.015294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.015340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.015528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.015566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.015710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.015745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.015855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.015889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.016082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.016119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 
00:37:42.903 [2024-11-18 18:44:41.016235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.016272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.016453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.016491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.016620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.016671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.016779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.016813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.016971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.017005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 
00:37:42.903 [2024-11-18 18:44:41.017157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.017194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.017315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.017352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.017467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.017506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.017656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.017706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.017819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.017854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 
00:37:42.903 [2024-11-18 18:44:41.017990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.018024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.018136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.018189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.018341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.018379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.018504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.018538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.018667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.018702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 
00:37:42.903 [2024-11-18 18:44:41.018850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.018902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.019074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.019111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.019259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.019297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.019409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.019447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.019628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.019677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 
00:37:42.903 [2024-11-18 18:44:41.019791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.019828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.019943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.019984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.020147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.020201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.020314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.020367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.020507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.020540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 
00:37:42.903 [2024-11-18 18:44:41.020676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.020712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.020829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.020863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.021028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.021066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.021283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.021342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 00:37:42.903 [2024-11-18 18:44:41.021466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.903 [2024-11-18 18:44:41.021503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.903 qpair failed and we were unable to recover it. 
00:37:42.903 [2024-11-18 18:44:41.021687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.903 [2024-11-18 18:44:41.021721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.903 qpair failed and we were unable to recover it.
00:37:42.903 [2024-11-18 18:44:41.021854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.903 [2024-11-18 18:44:41.021892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.903 qpair failed and we were unable to recover it.
00:37:42.903 [2024-11-18 18:44:41.022027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.903 [2024-11-18 18:44:41.022064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.903 qpair failed and we were unable to recover it.
00:37:42.903 [2024-11-18 18:44:41.022238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.903 [2024-11-18 18:44:41.022275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.903 qpair failed and we were unable to recover it.
00:37:42.903 [2024-11-18 18:44:41.022505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.903 [2024-11-18 18:44:41.022554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.903 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.022696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.022744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.022879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.022940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.023090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.023144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.023306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.023360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.023497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.023531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.023643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.023679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.023814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.023849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.023950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.023984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.024094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.024128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.024259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.024293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.024403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.024437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.024573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.024614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.024754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.024788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.024920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.024958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.025126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.025163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.025311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.025348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.025500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.025540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.025693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.025741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.025889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.025925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.026115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.026168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.026312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.026367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.026479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.026513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.026656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.026690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.026837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.026872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.027031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.027066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.027173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.027207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.027349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.027387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.027519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.027553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.027695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.027730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.027888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.027943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.028099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.028152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.028311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.028364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.028467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.028502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.028671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.028706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.028813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.028846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.028958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.028993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.029125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.029159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.029271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.029305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.029415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.029450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.029594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.029636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.029755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.029789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.029916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.029950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.030048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.030082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.030230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.030264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.030399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.030454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.030592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.030634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.030759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.030813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.030922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.030955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.031136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.031191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.031297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.031332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.031483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.031517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.031632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.031667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.031803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.031837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.032028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.032067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.032179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.032216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.032329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.032367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.032489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.032525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.032694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.032748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.032873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.032924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.033073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.033139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.033340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.033391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.033522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.904 [2024-11-18 18:44:41.033557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.904 qpair failed and we were unable to recover it.
00:37:42.904 [2024-11-18 18:44:41.033704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.033738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.033877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.033911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.034017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.034051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.034160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.034193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.034328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.034369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.034506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.034541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.034706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.034740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.034866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.034902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.035059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.035093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.035232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.035265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.035402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.035438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.035580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.035628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.035779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.035827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.035974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.036011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.036150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.036184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.036312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.036346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.036464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.036499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.036655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.036695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.036852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.036889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.037002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.037039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.037159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.037197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.037337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.037392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.037495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.037529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.037685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.037738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.037836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.037869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.038022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.038075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.038235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.038288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.038392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.038425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.038531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.038564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.038707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.038740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.038858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.038893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.039023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.039057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.039161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.039194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.039310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.039347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.039522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.039556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.039688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.039724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.039849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.039883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.040001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.040042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.040179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.040212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.040346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.040381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.040486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.040521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.040695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.040746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.040865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.040899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.040997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.041031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.041170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.041208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.041349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.041384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.041497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.041531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.041653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.905 [2024-11-18 18:44:41.041688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.905 qpair failed and we were unable to recover it.
00:37:42.905 [2024-11-18 18:44:41.041820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.905 [2024-11-18 18:44:41.041853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.905 qpair failed and we were unable to recover it. 00:37:42.905 [2024-11-18 18:44:41.041977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.905 [2024-11-18 18:44:41.042010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.905 qpair failed and we were unable to recover it. 00:37:42.905 [2024-11-18 18:44:41.042141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.905 [2024-11-18 18:44:41.042175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.905 qpair failed and we were unable to recover it. 00:37:42.905 [2024-11-18 18:44:41.042375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.905 [2024-11-18 18:44:41.042429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.905 qpair failed and we were unable to recover it. 00:37:42.905 [2024-11-18 18:44:41.042585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.905 [2024-11-18 18:44:41.042628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.905 qpair failed and we were unable to recover it. 
00:37:42.905 [2024-11-18 18:44:41.042764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.905 [2024-11-18 18:44:41.042828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.905 qpair failed and we were unable to recover it. 00:37:42.905 [2024-11-18 18:44:41.042974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.905 [2024-11-18 18:44:41.043026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.905 qpair failed and we were unable to recover it. 00:37:42.905 [2024-11-18 18:44:41.043176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.905 [2024-11-18 18:44:41.043227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.905 qpair failed and we were unable to recover it. 00:37:42.905 [2024-11-18 18:44:41.043345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.905 [2024-11-18 18:44:41.043379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.905 qpair failed and we were unable to recover it. 00:37:42.905 [2024-11-18 18:44:41.043518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.905 [2024-11-18 18:44:41.043557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.905 qpair failed and we were unable to recover it. 
00:37:42.905 [2024-11-18 18:44:41.043687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.905 [2024-11-18 18:44:41.043723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.905 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.043836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.043870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.043984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.044017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.044162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.044196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.044312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.044348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 
00:37:42.906 [2024-11-18 18:44:41.044491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.044526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.044644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.044679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.044797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.044849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.045004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.045057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.045213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.045266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 
00:37:42.906 [2024-11-18 18:44:41.045372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.045405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.045542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.045577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.045711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.045746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.045855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.045889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.045993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.046028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 
00:37:42.906 [2024-11-18 18:44:41.046137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.046170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.046270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.046304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.046413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.046450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.046564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.046599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.046762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.046813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 
00:37:42.906 [2024-11-18 18:44:41.046992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.047044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.047144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.047178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.047347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.047380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.047518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.047551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.047713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.047767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 
00:37:42.906 [2024-11-18 18:44:41.047913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.047950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.048115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.048157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.048303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.048337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.048471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.048505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.048629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.048678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 
00:37:42.906 [2024-11-18 18:44:41.048826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.048862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.048996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.049031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.049141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.049175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.049316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.049350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.049451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.049484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 
00:37:42.906 [2024-11-18 18:44:41.049647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.049681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.049840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.049873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.049993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.050027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.050144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.050199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.050388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.050441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 
00:37:42.906 [2024-11-18 18:44:41.050553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.050586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.050750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.050789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.050899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.050936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.051109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.051146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.051317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.051355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 
00:37:42.906 [2024-11-18 18:44:41.051476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.051513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.051708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.051743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.051858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.051896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.052012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.052051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.052193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.052230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 
00:37:42.906 [2024-11-18 18:44:41.052407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.052459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.052569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.052603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.052728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.052763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.052929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.052979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.053103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.053140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 
00:37:42.906 [2024-11-18 18:44:41.053298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.053336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.053490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.053524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.053653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.053687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.053788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.053822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.053948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.053985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 
00:37:42.906 [2024-11-18 18:44:41.054107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.054144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.054255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.054294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.054419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.054454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.054556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.054590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 00:37:42.906 [2024-11-18 18:44:41.054756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.906 [2024-11-18 18:44:41.054808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.906 qpair failed and we were unable to recover it. 
00:37:42.906 [2024-11-18 18:44:41.054973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.906 [2024-11-18 18:44:41.055027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.906 qpair failed and we were unable to recover it.
00:37:42.906 [2024-11-18 18:44:41.055149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.906 [2024-11-18 18:44:41.055211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.055315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.055348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.055512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.055551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.055720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.055759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.055880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.055917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.056087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.056124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.056266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.056304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.056426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.056464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.056619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.056655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.056849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.056883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.057039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.057092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.057217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.057252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.057358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.057392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.057530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.057563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.057680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.057714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.057876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.057909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.058011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.058044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.058188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.058223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.058332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.058366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.058535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.058569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.058682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.058716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.058828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.058863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.059024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.059058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.059212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.059249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.059400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.059438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.059593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.059662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.059791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.059825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.059988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.060023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.060163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.060198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.060347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.060399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.060515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.060552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.060694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.060728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.060829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.060863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.060968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.061001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.061163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.061196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.061356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.061394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.061560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.061597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.061760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.061794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.061947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.061996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.062143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.062200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.062363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.062420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.062581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.062620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.062737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.062771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.062919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.062973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.063105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.063157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.063325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.063359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.063495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.063530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.063689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.063740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.063891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.063927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.064072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.064106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.064239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.064277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.064450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.064488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.064602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.064663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.064800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.064835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.064950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.064985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.065115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.065149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.065284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.065321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.065439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.065477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.065598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.065657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.065787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.065821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.065962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.065996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.066162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.066199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.066405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.066442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.066589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.066635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.907 qpair failed and we were unable to recover it.
00:37:42.907 [2024-11-18 18:44:41.066766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.907 [2024-11-18 18:44:41.066800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.066926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.066960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.067094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.067147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.067300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.067337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.067519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.067556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.067753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.067787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.067936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.067974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.068153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.068191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.068338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.068375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.068492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.068529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.068719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.068753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.068905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.068942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.069060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.069097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.069296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.069335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.069470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.069503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.069684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.069719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.069828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.069869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.070010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.070044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.070188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.070226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.070368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.070405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.070548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.070584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.070720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.070754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.070887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.070922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.071065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.071099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.071230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.071264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.071428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.071497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.071621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.071658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.071766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.071801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.071948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.071983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.072119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.072154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.072294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.072328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.072448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.072483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.072622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.072657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.072760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.072794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.072958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.072997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.073115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.073152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.073322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.073359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.073520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.073556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.073708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.073742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.073897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.073949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.074104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.074155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.074258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.074293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.074440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.074476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.074644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.074680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.074815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.074849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.074950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.074984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.075109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.075146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.075272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.075310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.075458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.075495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.075657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.908 [2024-11-18 18:44:41.075704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.908 qpair failed and we were unable to recover it.
00:37:42.908 [2024-11-18 18:44:41.075851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.908 [2024-11-18 18:44:41.075906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.908 qpair failed and we were unable to recover it. 00:37:42.908 [2024-11-18 18:44:41.076057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.908 [2024-11-18 18:44:41.076109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.908 qpair failed and we were unable to recover it. 00:37:42.908 [2024-11-18 18:44:41.076268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.908 [2024-11-18 18:44:41.076321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.908 qpair failed and we were unable to recover it. 00:37:42.908 [2024-11-18 18:44:41.076428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.908 [2024-11-18 18:44:41.076462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.908 qpair failed and we were unable to recover it. 00:37:42.908 [2024-11-18 18:44:41.076599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.908 [2024-11-18 18:44:41.076639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.908 qpair failed and we were unable to recover it. 
00:37:42.908 [2024-11-18 18:44:41.076781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.908 [2024-11-18 18:44:41.076816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.908 qpair failed and we were unable to recover it. 00:37:42.908 [2024-11-18 18:44:41.076941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.908 [2024-11-18 18:44:41.076980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.908 qpair failed and we were unable to recover it. 00:37:42.908 [2024-11-18 18:44:41.077091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.908 [2024-11-18 18:44:41.077125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.908 qpair failed and we were unable to recover it. 00:37:42.908 [2024-11-18 18:44:41.077285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.908 [2024-11-18 18:44:41.077319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.908 qpair failed and we were unable to recover it. 00:37:42.908 [2024-11-18 18:44:41.077449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.908 [2024-11-18 18:44:41.077483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.908 qpair failed and we were unable to recover it. 
00:37:42.908 [2024-11-18 18:44:41.077622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.908 [2024-11-18 18:44:41.077657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.908 qpair failed and we were unable to recover it. 00:37:42.908 [2024-11-18 18:44:41.077768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.077804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.077942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.077976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.078095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.078129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.078235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.078270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 
00:37:42.909 [2024-11-18 18:44:41.078429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.078463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.078565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.078599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.078753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.078788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.078900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.078933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.079062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.079096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 
00:37:42.909 [2024-11-18 18:44:41.079204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.079238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.079347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.079382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.079528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.079562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.079733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.079771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.079882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.079920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 
00:37:42.909 [2024-11-18 18:44:41.080033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.080071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.080251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.080307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.080443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.080477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.080674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.080728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.080910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.080964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 
00:37:42.909 [2024-11-18 18:44:41.081101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.081154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.081288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.081322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.081437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.081472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.081638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.081688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.081853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.081891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 
00:37:42.909 [2024-11-18 18:44:41.082030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.082083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.082220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.082278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.082419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.082453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.082552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.082586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.082740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.082777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 
00:37:42.909 [2024-11-18 18:44:41.082892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.082926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.083029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.083082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.083236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.083270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.083405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.083440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.083538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.083571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 
00:37:42.909 [2024-11-18 18:44:41.083705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.083744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.083894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.083936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.084049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.084087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.084266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.084324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.084457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.084493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 
00:37:42.909 [2024-11-18 18:44:41.084601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.084641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.084795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.084847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.084947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.084981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.085114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.085147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.085258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.085294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 
00:37:42.909 [2024-11-18 18:44:41.085435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.085468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.085600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.085642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.085746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.085780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.085911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.085944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.086051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.086085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 
00:37:42.909 [2024-11-18 18:44:41.086236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.086271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.086401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.086436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.086546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.086579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.086690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.086724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.086839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.086872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 
00:37:42.909 [2024-11-18 18:44:41.087000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.087033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.087134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.087169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.087302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.087336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.087447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.087482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.087592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.087643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 
00:37:42.909 [2024-11-18 18:44:41.087749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.087783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.087884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.087918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.088045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.088079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.088212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.088246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.088379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.088413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 
00:37:42.909 [2024-11-18 18:44:41.088548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.088584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.088730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.088785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.088935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.088985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.089148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.089204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 00:37:42.909 [2024-11-18 18:44:41.089341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.909 [2024-11-18 18:44:41.089374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.909 qpair failed and we were unable to recover it. 
00:37:42.909 [2024-11-18 18:44:41.089477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.089511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.089673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.089709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.089847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.089881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.090018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.090052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.090188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.090220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 
00:37:42.910 [2024-11-18 18:44:41.090328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.090359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.090468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.090505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.090622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.090656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.090823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.090876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.091025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.091074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 
00:37:42.910 [2024-11-18 18:44:41.091231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.091284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.091398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.091432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.091568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.091602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.091746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.091797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.091950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.092001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 
00:37:42.910 [2024-11-18 18:44:41.092163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.092195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.092305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.092344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.092458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.092491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.092627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.092662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.092818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.092870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 
00:37:42.910 [2024-11-18 18:44:41.093006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.093040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.093170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.093204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.093306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.093339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.093472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.093506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.093618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.093652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 
00:37:42.910 [2024-11-18 18:44:41.093781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.093819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.093999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.094064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.094242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.094283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.094392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.094429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.094551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.094589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 
00:37:42.910 [2024-11-18 18:44:41.094736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.094771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.094908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.094941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.095094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.095133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.095285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.095323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.095494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.095531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 
00:37:42.910 [2024-11-18 18:44:41.095690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.095725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.095864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.095916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.096045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.096097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.096223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.096262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.096402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.096439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 
00:37:42.910 [2024-11-18 18:44:41.096594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.096635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.096741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.096774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.096926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.096963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.097126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.097164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.097321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.097360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 
00:37:42.910 [2024-11-18 18:44:41.097515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.097552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.097678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.097712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.097817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.097851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.098008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.098047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.098223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.098260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 
00:37:42.910 [2024-11-18 18:44:41.098401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.098438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.098589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.098639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.098793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.098826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.098933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.098985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.099100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.099137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 
00:37:42.910 [2024-11-18 18:44:41.099333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.099370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.099484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.099521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.099673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.099707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.099808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.099841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.099974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.100025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 
00:37:42.910 [2024-11-18 18:44:41.100149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.100186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.100356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.100394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.100531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.100568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.100690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.100724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.910 qpair failed and we were unable to recover it. 00:37:42.910 [2024-11-18 18:44:41.100841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.910 [2024-11-18 18:44:41.100876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 
00:37:42.911 [2024-11-18 18:44:41.100981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.101016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.101146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.101184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.101382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.101419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.101568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.101605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.101775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.101809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 
00:37:42.911 [2024-11-18 18:44:41.101946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.101979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.102131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.102169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.102312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.102350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.102470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.102528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.102667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.102702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 
00:37:42.911 [2024-11-18 18:44:41.102828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.102862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.102995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.103029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.103158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.103211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.103383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.103421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.103602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.103643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 
00:37:42.911 [2024-11-18 18:44:41.103805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.103852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.104026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.104067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.104206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.104240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.104365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.104399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.104521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.104558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 
00:37:42.911 [2024-11-18 18:44:41.104746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.104781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.104897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.104931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.105095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.105128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.105286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.105319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.105495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.105532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 
00:37:42.911 [2024-11-18 18:44:41.105667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.105701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.105823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.105857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.105998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.106051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.106226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.106263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.106447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.106481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 
00:37:42.911 [2024-11-18 18:44:41.106625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.106664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.106818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.106851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.106982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.107016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.107117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.107151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 00:37:42.911 [2024-11-18 18:44:41.107344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.911 [2024-11-18 18:44:41.107377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.911 qpair failed and we were unable to recover it. 
00:37:42.911 [2024-11-18 18:44:41.107508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.107541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.107668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.107716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.107863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.107899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.108073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.108107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.108216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.108251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.108412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.108446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.108603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.108645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.108762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.108797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.108900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.108934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.109063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.109096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.109231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.109265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.109445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.109482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.109617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.109660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.109771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.109809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.109942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.109976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.110100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.110133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.110268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.110301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.110433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.110467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.110649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.110683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.110859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.110895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.111032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.111071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.111224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.111258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.111441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.111478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.111627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.111666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.111825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.111859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.112010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.112048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.112169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.112205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.112365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.112408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.112566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.112600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.112771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.112804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.112913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.112955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.911 [2024-11-18 18:44:41.113094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.911 [2024-11-18 18:44:41.113127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.911 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.113266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.113299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.113461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.113494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.113627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.113661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.113821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.113854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.114001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.114035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.114176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.114213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.114359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.114397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.114554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.114588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.114742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.114776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.114947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.114990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.115158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.115192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.115295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.115329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.115451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.115485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.115690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.115724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.115895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.115945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.116069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.116105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.116278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.116311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.116444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.116477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.116602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.116641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.116767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.116801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.116960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.116997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.117106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.117148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.117295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.117329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.117456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.117490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.117694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.117728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.117828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.117861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.117999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.118047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.118194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.118231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.118414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.118447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.118627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.118696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.118869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.118933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.119104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.119140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.119330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.119386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.119530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.119567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.119745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.119779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.119941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.119979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.120121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.120159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.120314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.120347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.120495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.120550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.120733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.120769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.120906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.120940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.121081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.121116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.121296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.121333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.121448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.121497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.121649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.121685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.121862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.121897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.122039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.122073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.122262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.122299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.122475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.122513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.122695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.122730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.122922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.122961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.123150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.123188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.123373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.123407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.123518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.123569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.123722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.123756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.123920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.123953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.124119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.124178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.124321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.124358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.124520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.124554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.124697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.124759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.124928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.124967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.125107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.125146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.125284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.912 [2024-11-18 18:44:41.125340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.912 qpair failed and we were unable to recover it.
00:37:42.912 [2024-11-18 18:44:41.125492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.912 [2024-11-18 18:44:41.125530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.912 qpair failed and we were unable to recover it. 00:37:42.912 [2024-11-18 18:44:41.125685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.912 [2024-11-18 18:44:41.125719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.912 qpair failed and we were unable to recover it. 00:37:42.912 [2024-11-18 18:44:41.125859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.912 [2024-11-18 18:44:41.125895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.912 qpair failed and we were unable to recover it. 00:37:42.912 [2024-11-18 18:44:41.126067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.912 [2024-11-18 18:44:41.126100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.912 qpair failed and we were unable to recover it. 00:37:42.912 [2024-11-18 18:44:41.126246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.912 [2024-11-18 18:44:41.126279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.912 qpair failed and we were unable to recover it. 
00:37:42.912 [2024-11-18 18:44:41.126441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.912 [2024-11-18 18:44:41.126492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.912 qpair failed and we were unable to recover it. 00:37:42.912 [2024-11-18 18:44:41.126634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.912 [2024-11-18 18:44:41.126688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.912 qpair failed and we were unable to recover it. 00:37:42.912 [2024-11-18 18:44:41.126852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.912 [2024-11-18 18:44:41.126886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.912 qpair failed and we were unable to recover it. 00:37:42.912 [2024-11-18 18:44:41.127057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.912 [2024-11-18 18:44:41.127095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.912 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.127232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.127269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 
00:37:42.913 [2024-11-18 18:44:41.127396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.127447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.127628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.127679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.127804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.127838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.128002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.128036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.128138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.128171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 
00:37:42.913 [2024-11-18 18:44:41.128304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.128338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.128481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.128514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.128688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.128743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.128878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.128926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.129107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.129141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 
00:37:42.913 [2024-11-18 18:44:41.129306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.129379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.129532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.129570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.129724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.129758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.129900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.129935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.130109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.130144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 
00:37:42.913 [2024-11-18 18:44:41.130286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.130319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.130484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.130535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.130743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.130778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.130902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.130936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.131082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.131118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 
00:37:42.913 [2024-11-18 18:44:41.131286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.131323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.131507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.131541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.131694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.131729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.131863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.131906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.132042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.132076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 
00:37:42.913 [2024-11-18 18:44:41.132202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.132255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.132393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.132430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.132563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.132616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.132780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.132820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.132928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.132961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 
00:37:42.913 [2024-11-18 18:44:41.133089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.133123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.133311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.133349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.133463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.133500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.133632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.133666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.133813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.133847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 
00:37:42.913 [2024-11-18 18:44:41.134007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.134044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.134225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.134258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.134351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.134403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.134556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.134594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.134762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.134795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 
00:37:42.913 [2024-11-18 18:44:41.134927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.134978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.135152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.135190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.135385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.135418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.135560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.135618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.135782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.135816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 
00:37:42.913 [2024-11-18 18:44:41.135959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.135994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.136123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.136157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.136283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.136320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.136471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.136505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.136668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.136703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 
00:37:42.913 [2024-11-18 18:44:41.136812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.136846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.136994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.137028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.137171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.137208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.137377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.137415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.137546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.137580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 
00:37:42.913 [2024-11-18 18:44:41.137740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.137774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.137928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.137965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.138127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.138160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.138312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.138380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.138574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.138632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 
00:37:42.913 [2024-11-18 18:44:41.138797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.138832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.139010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.139048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.139187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.139224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.139384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.139419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 00:37:42.913 [2024-11-18 18:44:41.139553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.913 [2024-11-18 18:44:41.139625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.913 qpair failed and we were unable to recover it. 
00:37:42.914 [2024-11-18 18:44:41.139807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.914 [2024-11-18 18:44:41.139841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.914 qpair failed and we were unable to recover it. 00:37:42.914 [2024-11-18 18:44:41.139973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.914 [2024-11-18 18:44:41.140008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.914 qpair failed and we were unable to recover it. 00:37:42.914 [2024-11-18 18:44:41.140113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.914 [2024-11-18 18:44:41.140147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.914 qpair failed and we were unable to recover it. 00:37:42.914 [2024-11-18 18:44:41.140283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.914 [2024-11-18 18:44:41.140321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.914 qpair failed and we were unable to recover it. 00:37:42.914 [2024-11-18 18:44:41.140444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.914 [2024-11-18 18:44:41.140481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.914 qpair failed and we were unable to recover it. 
00:37:42.914 [2024-11-18 18:44:41.140693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.914 [2024-11-18 18:44:41.140742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.914 qpair failed and we were unable to recover it. 00:37:42.914 [2024-11-18 18:44:41.140861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.914 [2024-11-18 18:44:41.140924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.914 qpair failed and we were unable to recover it. 00:37:42.914 [2024-11-18 18:44:41.141082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.914 [2024-11-18 18:44:41.141116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.914 qpair failed and we were unable to recover it. 00:37:42.914 [2024-11-18 18:44:41.141233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.914 [2024-11-18 18:44:41.141268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.914 qpair failed and we were unable to recover it. 00:37:42.914 [2024-11-18 18:44:41.141460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.914 [2024-11-18 18:44:41.141498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.914 qpair failed and we were unable to recover it. 
00:37:42.914 [2024-11-18 18:44:41.141683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.914 [2024-11-18 18:44:41.141718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.914 qpair failed and we were unable to recover it. 00:37:42.914 [2024-11-18 18:44:41.141858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.914 [2024-11-18 18:44:41.141893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.914 qpair failed and we were unable to recover it. 00:37:42.914 [2024-11-18 18:44:41.142003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.914 [2024-11-18 18:44:41.142036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.914 qpair failed and we were unable to recover it. 00:37:42.914 [2024-11-18 18:44:41.142167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.914 [2024-11-18 18:44:41.142200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.914 qpair failed and we were unable to recover it. 00:37:42.914 [2024-11-18 18:44:41.142327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.914 [2024-11-18 18:44:41.142361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.914 qpair failed and we were unable to recover it. 
00:37:42.914 [2024-11-18 18:44:41.142536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.914 [2024-11-18 18:44:41.142570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.914 qpair failed and we were unable to recover it. 00:37:42.914 [2024-11-18 18:44:41.142747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.914 [2024-11-18 18:44:41.142781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.914 qpair failed and we were unable to recover it. 00:37:42.914 [2024-11-18 18:44:41.142948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.914 [2024-11-18 18:44:41.143000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.914 qpair failed and we were unable to recover it. 00:37:42.914 [2024-11-18 18:44:41.143106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.914 [2024-11-18 18:44:41.143143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.914 qpair failed and we were unable to recover it. 00:37:42.914 [2024-11-18 18:44:41.143266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.914 [2024-11-18 18:44:41.143299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.914 qpair failed and we were unable to recover it. 
00:37:42.914 [2024-11-18 18:44:41.143439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.143474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.143624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.143659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.143791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.143825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.143985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.144023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.144151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.144189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.144341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.144376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.144511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.144545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.144689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.144724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.144849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.144883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.145015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.145068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.145227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.145265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.145423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.145457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.145598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.145641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.145747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.145782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.145945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.145994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.146141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.146175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.146302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.146355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.146525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.146563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.146703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.146738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.146872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.146927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.147073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.147110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.147316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.147353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.147485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.147518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.147667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.147706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.147811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.147845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.147983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.148033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.148212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.148264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.148446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.148483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.148600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.148672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.148776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.148809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.148972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.149006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.149136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.149173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.149315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.149352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.149526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.149563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.149711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.149745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.149884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.149970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.150115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.150175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.150367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.150407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.150521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.150559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.150745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.150794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.150966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.151003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.151162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.151219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.151443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.151499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.151641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.151675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.151854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.151906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.152190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.152263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.152469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.152509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.152666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.152702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.152852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.152889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.153007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.153044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.153176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.153214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.914 [2024-11-18 18:44:41.153421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.914 [2024-11-18 18:44:41.153491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.914 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.153758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.153794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.153897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.153941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.154094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.154146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.154271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.154324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.154434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.154469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.154601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.154644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.154792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.154845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.155008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.155043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.155185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.155219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.155334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.155367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.155499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.155533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.155709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.155764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.155934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.155970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.156088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.156123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.156287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.156321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.156454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.156487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.156638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.156673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.156821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.156876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.157015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.157052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.157164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.157197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.157336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.157370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.157506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.157540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.157708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.157757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.157876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.157921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.158050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.158084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.158201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.158235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.158408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.158444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.158611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.158647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.158789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.158826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.158996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.159030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.159164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.159198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.159336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.159370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.159504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.159539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.159684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.159719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.159847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.159884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.160074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.160107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.160243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.160278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.160386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.160421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.160540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.160575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.160705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.160739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.160942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.160996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.161125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.161166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.161354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.915 [2024-11-18 18:44:41.161393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:42.915 qpair failed and we were unable to recover it.
00:37:42.915 [2024-11-18 18:44:41.161542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.161575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 00:37:42.915 [2024-11-18 18:44:41.161728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.161764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 00:37:42.915 [2024-11-18 18:44:41.161873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.161914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 00:37:42.915 [2024-11-18 18:44:41.162039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.162076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 00:37:42.915 [2024-11-18 18:44:41.162292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.162351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 
00:37:42.915 [2024-11-18 18:44:41.162497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.162534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 00:37:42.915 [2024-11-18 18:44:41.162669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.162703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 00:37:42.915 [2024-11-18 18:44:41.162803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.162837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 00:37:42.915 [2024-11-18 18:44:41.163027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.163079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 00:37:42.915 [2024-11-18 18:44:41.163245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.163282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 
00:37:42.915 [2024-11-18 18:44:41.163428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.163466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 00:37:42.915 [2024-11-18 18:44:41.163653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.163687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 00:37:42.915 [2024-11-18 18:44:41.163814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.163848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 00:37:42.915 [2024-11-18 18:44:41.163995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.164049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 00:37:42.915 [2024-11-18 18:44:41.164154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.164191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 
00:37:42.915 [2024-11-18 18:44:41.164428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.164462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 00:37:42.915 [2024-11-18 18:44:41.164624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.164677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 00:37:42.915 [2024-11-18 18:44:41.164841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.164875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 00:37:42.915 [2024-11-18 18:44:41.165100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.165137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 00:37:42.915 [2024-11-18 18:44:41.165244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.165281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 
00:37:42.915 [2024-11-18 18:44:41.165395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.165432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 00:37:42.915 [2024-11-18 18:44:41.165563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.165596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 00:37:42.915 [2024-11-18 18:44:41.165742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.915 [2024-11-18 18:44:41.165776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.915 qpair failed and we were unable to recover it. 00:37:42.915 [2024-11-18 18:44:41.165955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.165993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.166175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.166212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 
00:37:42.916 [2024-11-18 18:44:41.166352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.166389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.166526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.166563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.166702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.166736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.166839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.166872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.167060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.167098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 
00:37:42.916 [2024-11-18 18:44:41.167220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.167272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.167409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.167447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.167593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.167655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.167789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.167822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.167954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.168006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 
00:37:42.916 [2024-11-18 18:44:41.168195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.168232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.168431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.168478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.168660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.168695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.168832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.168865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.169002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.169036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 
00:37:42.916 [2024-11-18 18:44:41.169213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.169250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.169380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.169417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.169566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.169616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.169755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.169790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.169940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.169977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 
00:37:42.916 [2024-11-18 18:44:41.170108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.170160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.170281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.170319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.170460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.170498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.170693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.170732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.170846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.170904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 
00:37:42.916 [2024-11-18 18:44:41.171086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.171138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.171351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.171389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.171529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.171566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.171721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.171759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.171870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.171910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 
00:37:42.916 [2024-11-18 18:44:41.172034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.172100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.172255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.172296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.172429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.172464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.172597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.172638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.172756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.172790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 
00:37:42.916 [2024-11-18 18:44:41.172924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.172963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.173101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.173138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.173260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.173311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.173436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.173470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.173598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.173653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 
00:37:42.916 [2024-11-18 18:44:41.173777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.173812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.173995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.174065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.174224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.174258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.174394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.174428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.174564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.174615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 
00:37:42.916 [2024-11-18 18:44:41.174757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.174811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.174936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.174974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.175092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.175144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.175305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.175338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.175451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.175484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 
00:37:42.916 [2024-11-18 18:44:41.175668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.175721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.175895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.175963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.176093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.176130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.176270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.176305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.176412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.176457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 
00:37:42.916 [2024-11-18 18:44:41.176600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.176648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.176759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.176794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.176929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.176963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.177124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.177158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.177258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.177292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 
00:37:42.916 [2024-11-18 18:44:41.177392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.177426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.177539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.916 [2024-11-18 18:44:41.177573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:42.916 qpair failed and we were unable to recover it. 00:37:42.916 [2024-11-18 18:44:41.177766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.214 [2024-11-18 18:44:41.177804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.214 qpair failed and we were unable to recover it. 00:37:43.214 [2024-11-18 18:44:41.177950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.214 [2024-11-18 18:44:41.178024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.214 qpair failed and we were unable to recover it. 00:37:43.214 [2024-11-18 18:44:41.178169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.214 [2024-11-18 18:44:41.178219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.214 qpair failed and we were unable to recover it. 
00:37:43.218 [2024-11-18 18:44:41.200190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.200228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.200489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.200548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.200734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.200770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.200923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.200961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.201192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.201230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 
00:37:43.218 [2024-11-18 18:44:41.201356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.201407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.201603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.201670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.201775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.201819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.201960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.201995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.202129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.202163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 
00:37:43.218 [2024-11-18 18:44:41.202324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.202362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.202548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.202583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.202731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.202766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.202893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.202941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.203101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.203157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 
00:37:43.218 [2024-11-18 18:44:41.203331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.203369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.203508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.203546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.203715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.203750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.203852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.203912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.204081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.204150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 
00:37:43.218 [2024-11-18 18:44:41.204278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.204331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.204439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.204477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.204661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.204697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.204821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.204870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.205000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.205037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 
00:37:43.218 [2024-11-18 18:44:41.205149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.205184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.205337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.205390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.205566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.205618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.205762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.205796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.205940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.205974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 
00:37:43.218 [2024-11-18 18:44:41.206114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.206149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.206388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.218 [2024-11-18 18:44:41.206448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.218 qpair failed and we were unable to recover it. 00:37:43.218 [2024-11-18 18:44:41.206612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.206667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 00:37:43.219 [2024-11-18 18:44:41.206799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.206834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 00:37:43.219 [2024-11-18 18:44:41.207014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.207050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 
00:37:43.219 [2024-11-18 18:44:41.207294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.207370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 00:37:43.219 [2024-11-18 18:44:41.207523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.207561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 00:37:43.219 [2024-11-18 18:44:41.207757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.207793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 00:37:43.219 [2024-11-18 18:44:41.208008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.208062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 00:37:43.219 [2024-11-18 18:44:41.208261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.208302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 
00:37:43.219 [2024-11-18 18:44:41.208493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.208553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 00:37:43.219 [2024-11-18 18:44:41.208751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.208787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 00:37:43.219 [2024-11-18 18:44:41.208914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.208970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 00:37:43.219 [2024-11-18 18:44:41.209114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.209166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 00:37:43.219 [2024-11-18 18:44:41.209320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.209377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 
00:37:43.219 [2024-11-18 18:44:41.209500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.209536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 00:37:43.219 [2024-11-18 18:44:41.209676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.209726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 00:37:43.219 [2024-11-18 18:44:41.209905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.209941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 00:37:43.219 [2024-11-18 18:44:41.210105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.210139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 00:37:43.219 [2024-11-18 18:44:41.210257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.210291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 
00:37:43.219 [2024-11-18 18:44:41.210436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.210471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 00:37:43.219 [2024-11-18 18:44:41.210621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.210656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 00:37:43.219 [2024-11-18 18:44:41.210784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.210818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 00:37:43.219 [2024-11-18 18:44:41.210953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.211001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 00:37:43.219 [2024-11-18 18:44:41.211137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.211173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 
00:37:43.219 [2024-11-18 18:44:41.211334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.211390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.219 qpair failed and we were unable to recover it. 00:37:43.219 [2024-11-18 18:44:41.211528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.219 [2024-11-18 18:44:41.211563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.220 qpair failed and we were unable to recover it. 00:37:43.220 [2024-11-18 18:44:41.211716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.220 [2024-11-18 18:44:41.211769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.220 qpair failed and we were unable to recover it. 00:37:43.220 [2024-11-18 18:44:41.211990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.220 [2024-11-18 18:44:41.212044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.220 qpair failed and we were unable to recover it. 00:37:43.220 [2024-11-18 18:44:41.212230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.220 [2024-11-18 18:44:41.212295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.220 qpair failed and we were unable to recover it. 
00:37:43.220 [2024-11-18 18:44:41.212429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.220 [2024-11-18 18:44:41.212464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.220 qpair failed and we were unable to recover it. 00:37:43.220 [2024-11-18 18:44:41.212602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.220 [2024-11-18 18:44:41.212645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.220 qpair failed and we were unable to recover it. 00:37:43.220 [2024-11-18 18:44:41.212797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.220 [2024-11-18 18:44:41.212833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.220 qpair failed and we were unable to recover it. 00:37:43.220 [2024-11-18 18:44:41.212970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.220 [2024-11-18 18:44:41.213004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.220 qpair failed and we were unable to recover it. 00:37:43.220 [2024-11-18 18:44:41.213166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.220 [2024-11-18 18:44:41.213200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.220 qpair failed and we were unable to recover it. 
00:37:43.220 [2024-11-18 18:44:41.213360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.220 [2024-11-18 18:44:41.213394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.220 qpair failed and we were unable to recover it. 00:37:43.220 [2024-11-18 18:44:41.213555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.220 [2024-11-18 18:44:41.213601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.220 qpair failed and we were unable to recover it. 00:37:43.220 [2024-11-18 18:44:41.213731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.220 [2024-11-18 18:44:41.213766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.220 qpair failed and we were unable to recover it. 00:37:43.220 [2024-11-18 18:44:41.213884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.220 [2024-11-18 18:44:41.213922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.220 qpair failed and we were unable to recover it. 00:37:43.220 [2024-11-18 18:44:41.214030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.220 [2024-11-18 18:44:41.214068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.220 qpair failed and we were unable to recover it. 
00:37:43.220 [2024-11-18 18:44:41.214215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.220 [2024-11-18 18:44:41.214252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.220 qpair failed and we were unable to recover it. 00:37:43.220 [2024-11-18 18:44:41.214425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.220 [2024-11-18 18:44:41.214479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.220 qpair failed and we were unable to recover it. 00:37:43.220 [2024-11-18 18:44:41.214614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.220 [2024-11-18 18:44:41.214649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.220 qpair failed and we were unable to recover it. 00:37:43.220 [2024-11-18 18:44:41.214812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.220 [2024-11-18 18:44:41.214847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.220 qpair failed and we were unable to recover it. 00:37:43.220 [2024-11-18 18:44:41.214989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.220 [2024-11-18 18:44:41.215041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.220 qpair failed and we were unable to recover it. 
00:37:43.220 [2024-11-18 18:44:41.215203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.220 [2024-11-18 18:44:41.215259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.220 qpair failed and we were unable to recover it.
00:37:43.220 [2024-11-18 18:44:41.215368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.220 [2024-11-18 18:44:41.215403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.220 qpair failed and we were unable to recover it.
00:37:43.220 [2024-11-18 18:44:41.215518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.220 [2024-11-18 18:44:41.215553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.220 qpair failed and we were unable to recover it.
00:37:43.220 [2024-11-18 18:44:41.215700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.220 [2024-11-18 18:44:41.215734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.220 qpair failed and we were unable to recover it.
00:37:43.220 [2024-11-18 18:44:41.215893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.220 [2024-11-18 18:44:41.215941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.220 qpair failed and we were unable to recover it.
00:37:43.220 [2024-11-18 18:44:41.216060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.220 [2024-11-18 18:44:41.216094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.220 qpair failed and we were unable to recover it.
00:37:43.220 [2024-11-18 18:44:41.216222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.220 [2024-11-18 18:44:41.216257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.220 qpair failed and we were unable to recover it.
00:37:43.220 [2024-11-18 18:44:41.216369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.220 [2024-11-18 18:44:41.216404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.220 qpair failed and we were unable to recover it.
00:37:43.220 [2024-11-18 18:44:41.216533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.220 [2024-11-18 18:44:41.216567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.220 qpair failed and we were unable to recover it.
00:37:43.220 [2024-11-18 18:44:41.216724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.220 [2024-11-18 18:44:41.216773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.220 qpair failed and we were unable to recover it.
00:37:43.220 [2024-11-18 18:44:41.216956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.220 [2024-11-18 18:44:41.216991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.220 qpair failed and we were unable to recover it.
00:37:43.220 [2024-11-18 18:44:41.217133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.220 [2024-11-18 18:44:41.217167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.220 qpair failed and we were unable to recover it.
00:37:43.220 [2024-11-18 18:44:41.217279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.220 [2024-11-18 18:44:41.217317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.220 qpair failed and we were unable to recover it.
00:37:43.220 [2024-11-18 18:44:41.217438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.220 [2024-11-18 18:44:41.217476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.220 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.217598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.217659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.217774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.217808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.217913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.217947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.218097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.218134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.218316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.218370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.218521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.218556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.218790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.218825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.218974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.219027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.219181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.219232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.219373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.219408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.219516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.219552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.219714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.219764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.219958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.219999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.220134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.220188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.220342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.220381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.220528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.220566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.220750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.220800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.220982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.221037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.221176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.221230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.221383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.221441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.221614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.221650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.221794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.221832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.221983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.222035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.222178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.222227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.222348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.222385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.222491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.222526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.222679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.222720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.222894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.222929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.223039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.223072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.223211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.223247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.223429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.221 [2024-11-18 18:44:41.223464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.221 qpair failed and we were unable to recover it.
00:37:43.221 [2024-11-18 18:44:41.223573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.223625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.223727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.223761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.223910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.223963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.224110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.224163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.224293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.224328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.224468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.224502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.224664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.224714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.224854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.224891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.225069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.225105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.225273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.225308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.225413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.225448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.225601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.225656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.225797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.225831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.225974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.226008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.226136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.226174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.226325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.226362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.226513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.226566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.226723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.226759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.226872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.226915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.227133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.227200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.227387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.227441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.227600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.227642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.227791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.227826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.227934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.227968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.228129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.228163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.228304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.228384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.228572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.228637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.228816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.228864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.229155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.229222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.229502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.229561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.229720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.229755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.222 [2024-11-18 18:44:41.229919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.222 [2024-11-18 18:44:41.229953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.222 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.230077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.230115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.230280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.230345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.230472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.230506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.230643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.230683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.230809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.230857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.231039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.231076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.231227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.231266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.231419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.231457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.231628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.231664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.231772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.231806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.231929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.231968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.232164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.232202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.232340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.232378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.232504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.232541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.232695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.232742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.232896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.232951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.233126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.233200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.233445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.233504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.233667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.233701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.233801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.233834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.233958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.233993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.234110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.234161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.234361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.234395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.234579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.234627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.234767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.234800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.234950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.234988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.235168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.235206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.235385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.235424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.235548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.235586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.235758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.235792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.235955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.236028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.223 qpair failed and we were unable to recover it.
00:37:43.223 [2024-11-18 18:44:41.236289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.223 [2024-11-18 18:44:41.236347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.224 qpair failed and we were unable to recover it.
00:37:43.224 [2024-11-18 18:44:41.236498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.224 [2024-11-18 18:44:41.236534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.224 qpair failed and we were unable to recover it.
00:37:43.224 [2024-11-18 18:44:41.236654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.224 [2024-11-18 18:44:41.236709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.224 qpair failed and we were unable to recover it.
00:37:43.224 [2024-11-18 18:44:41.236812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.224 [2024-11-18 18:44:41.236846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.224 qpair failed and we were unable to recover it.
00:37:43.224 [2024-11-18 18:44:41.236993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.224 [2024-11-18 18:44:41.237027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.224 qpair failed and we were unable to recover it.
00:37:43.224 [2024-11-18 18:44:41.237188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.224 [2024-11-18 18:44:41.237241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.224 qpair failed and we were unable to recover it.
00:37:43.224 [2024-11-18 18:44:41.237390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.224 [2024-11-18 18:44:41.237427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.224 qpair failed and we were unable to recover it.
00:37:43.224 [2024-11-18 18:44:41.237575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.237626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.237736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.237770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.237947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.237985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.238135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.238168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.238282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.238316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 
00:37:43.224 [2024-11-18 18:44:41.238477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.238516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.238657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.238691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.238790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.238823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.238966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.239004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.239161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.239195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 
00:37:43.224 [2024-11-18 18:44:41.239320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.239354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.239488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.239526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.239686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.239720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.239825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.239859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.240034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.240089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 
00:37:43.224 [2024-11-18 18:44:41.240270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.240308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.240487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.240540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.240749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.240783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.240919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.240952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.241068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.241102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 
00:37:43.224 [2024-11-18 18:44:41.241211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.241245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.241345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.241380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.241490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.241523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.241710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.241759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.241930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.241967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 
00:37:43.224 [2024-11-18 18:44:41.242076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.242110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.242251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.242285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.242484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.224 [2024-11-18 18:44:41.242519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.224 qpair failed and we were unable to recover it. 00:37:43.224 [2024-11-18 18:44:41.242652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.242711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.242878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.242938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 
00:37:43.225 [2024-11-18 18:44:41.243118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.243153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.243332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.243370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.243524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.243562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.243726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.243761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.243972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.244037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 
00:37:43.225 [2024-11-18 18:44:41.244234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.244296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.244477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.244515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.244691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.244726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.244859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.244914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.245068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.245102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 
00:37:43.225 [2024-11-18 18:44:41.245242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.245276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.245412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.245446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.245618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.245653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.245819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.245853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.246038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.246076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 
00:37:43.225 [2024-11-18 18:44:41.246264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.246302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.246452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.246490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.246655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.246690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.246822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.246856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.246996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.247031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 
00:37:43.225 [2024-11-18 18:44:41.247229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.247263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.247403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.247437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.247564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.247627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.247773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.247821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 00:37:43.225 [2024-11-18 18:44:41.247998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.225 [2024-11-18 18:44:41.248037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.225 qpair failed and we were unable to recover it. 
00:37:43.225 [2024-11-18 18:44:41.248239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.248294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 00:37:43.226 [2024-11-18 18:44:41.248491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.248530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 00:37:43.226 [2024-11-18 18:44:41.248661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.248696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 00:37:43.226 [2024-11-18 18:44:41.248822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.248856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 00:37:43.226 [2024-11-18 18:44:41.248984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.249019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 
00:37:43.226 [2024-11-18 18:44:41.249178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.249212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 00:37:43.226 [2024-11-18 18:44:41.249374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.249424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 00:37:43.226 [2024-11-18 18:44:41.249616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.249672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 00:37:43.226 [2024-11-18 18:44:41.249814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.249850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 00:37:43.226 [2024-11-18 18:44:41.250036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.250075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 
00:37:43.226 [2024-11-18 18:44:41.250308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.250385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 00:37:43.226 [2024-11-18 18:44:41.250581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.250626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 00:37:43.226 [2024-11-18 18:44:41.250744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.250778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 00:37:43.226 [2024-11-18 18:44:41.250922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.250957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 00:37:43.226 [2024-11-18 18:44:41.251091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.251126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 
00:37:43.226 [2024-11-18 18:44:41.251236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.251271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 00:37:43.226 [2024-11-18 18:44:41.251461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.251498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 00:37:43.226 [2024-11-18 18:44:41.251665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.251699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 00:37:43.226 [2024-11-18 18:44:41.251833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.251867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 00:37:43.226 [2024-11-18 18:44:41.252030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.252068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 
00:37:43.226 [2024-11-18 18:44:41.252220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.252254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 00:37:43.226 [2024-11-18 18:44:41.252388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.252439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 00:37:43.226 [2024-11-18 18:44:41.252620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.252685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 00:37:43.226 [2024-11-18 18:44:41.252820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.252854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 00:37:43.226 [2024-11-18 18:44:41.253006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.226 [2024-11-18 18:44:41.253043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.226 qpair failed and we were unable to recover it. 
00:37:43.230 [2024-11-18 18:44:41.274665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.230 [2024-11-18 18:44:41.274702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.230 qpair failed and we were unable to recover it. 00:37:43.230 [2024-11-18 18:44:41.274846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.230 [2024-11-18 18:44:41.274896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.230 qpair failed and we were unable to recover it. 00:37:43.230 [2024-11-18 18:44:41.275146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.230 [2024-11-18 18:44:41.275213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.230 qpair failed and we were unable to recover it. 00:37:43.230 [2024-11-18 18:44:41.275406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.230 [2024-11-18 18:44:41.275461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.230 qpair failed and we were unable to recover it. 00:37:43.230 [2024-11-18 18:44:41.275601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.230 [2024-11-18 18:44:41.275644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.230 qpair failed and we were unable to recover it. 
00:37:43.230 [2024-11-18 18:44:41.275754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.230 [2024-11-18 18:44:41.275789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.230 qpair failed and we were unable to recover it. 00:37:43.230 [2024-11-18 18:44:41.275916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.230 [2024-11-18 18:44:41.275955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.230 qpair failed and we were unable to recover it. 00:37:43.230 [2024-11-18 18:44:41.276111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.230 [2024-11-18 18:44:41.276151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.230 qpair failed and we were unable to recover it. 00:37:43.230 [2024-11-18 18:44:41.276278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.230 [2024-11-18 18:44:41.276317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.230 qpair failed and we were unable to recover it. 00:37:43.230 [2024-11-18 18:44:41.276420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.230 [2024-11-18 18:44:41.276458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.230 qpair failed and we were unable to recover it. 
00:37:43.230 [2024-11-18 18:44:41.276636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.230 [2024-11-18 18:44:41.276691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.230 qpair failed and we were unable to recover it. 00:37:43.230 [2024-11-18 18:44:41.276822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.230 [2024-11-18 18:44:41.276871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.230 qpair failed and we were unable to recover it. 00:37:43.230 [2024-11-18 18:44:41.277033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.230 [2024-11-18 18:44:41.277089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.230 qpair failed and we were unable to recover it. 00:37:43.230 [2024-11-18 18:44:41.277264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.230 [2024-11-18 18:44:41.277303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.230 qpair failed and we were unable to recover it. 00:37:43.230 [2024-11-18 18:44:41.277476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.230 [2024-11-18 18:44:41.277515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.230 qpair failed and we were unable to recover it. 
00:37:43.230 [2024-11-18 18:44:41.277707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.230 [2024-11-18 18:44:41.277762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.230 qpair failed and we were unable to recover it. 00:37:43.230 [2024-11-18 18:44:41.277919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.230 [2024-11-18 18:44:41.277970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.230 qpair failed and we were unable to recover it. 00:37:43.230 [2024-11-18 18:44:41.278135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.230 [2024-11-18 18:44:41.278175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.230 qpair failed and we were unable to recover it. 00:37:43.230 [2024-11-18 18:44:41.278367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.230 [2024-11-18 18:44:41.278427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.230 qpair failed and we were unable to recover it. 00:37:43.230 [2024-11-18 18:44:41.278589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.230 [2024-11-18 18:44:41.278633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.230 qpair failed and we were unable to recover it. 
00:37:43.230 [2024-11-18 18:44:41.278780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.278815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.278944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.278981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.279156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.279194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.279307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.279345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.279482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.279516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 
00:37:43.231 [2024-11-18 18:44:41.279627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.279660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.279791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.279825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.279976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.280013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.280259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.280298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.280439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.280477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 
00:37:43.231 [2024-11-18 18:44:41.280663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.280699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.280830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.280880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.280992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.281030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.281160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.281196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.281366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.281402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 
00:37:43.231 [2024-11-18 18:44:41.281541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.281577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.281708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.281757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.281900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.281937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.282053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.282088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.282244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.282278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 
00:37:43.231 [2024-11-18 18:44:41.282413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.282447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.282555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.282589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.282709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.282745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.282850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.282885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.283021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.283077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 
00:37:43.231 [2024-11-18 18:44:41.283186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.283221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.283381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.283416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.283554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.283589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.283753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.283790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 00:37:43.231 [2024-11-18 18:44:41.283909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.231 [2024-11-18 18:44:41.283959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.231 qpair failed and we were unable to recover it. 
00:37:43.231 [2024-11-18 18:44:41.284072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.284110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.284274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.284309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.284448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.284483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.284645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.284681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.284899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.284954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 
00:37:43.232 [2024-11-18 18:44:41.285101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.285169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.285439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.285495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.285673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.285713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.285887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.285941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.286078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.286113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 
00:37:43.232 [2024-11-18 18:44:41.286281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.286317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.286458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.286494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.286665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.286716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.286866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.286921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.287030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.287066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 
00:37:43.232 [2024-11-18 18:44:41.287198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.287233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.287360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.287395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.287513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.287573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.287696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.287732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.287874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.287909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 
00:37:43.232 [2024-11-18 18:44:41.288153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.288215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.288409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.288463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.288629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.288697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.288861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.288904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.289074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.289129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 
00:37:43.232 [2024-11-18 18:44:41.289280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.289321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.289482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.289522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.289640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.289695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.289831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.289866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.290085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.290123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 
00:37:43.232 [2024-11-18 18:44:41.290275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.290314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.232 [2024-11-18 18:44:41.290491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.232 [2024-11-18 18:44:41.290528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.232 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.290708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.290758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.290889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.290939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.291118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.291177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 
00:37:43.233 [2024-11-18 18:44:41.291374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.291435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.291562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.291598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.291724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.291760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.291942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.291982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.292233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.292293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 
00:37:43.233 [2024-11-18 18:44:41.292454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.292494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.292635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.292690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.292828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.292863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.293013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.293050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.293261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.293323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 
00:37:43.233 [2024-11-18 18:44:41.293468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.293518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.293695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.293730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.293843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.293878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.294058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.294093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.294255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.294293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 
00:37:43.233 [2024-11-18 18:44:41.294470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.294509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.294672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.294708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.294839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.294891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.295052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.295119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.295334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.295376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 
00:37:43.233 [2024-11-18 18:44:41.295554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.295594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.295746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.295789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.295918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.295965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.296082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.296131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.296423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.296498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 
00:37:43.233 [2024-11-18 18:44:41.296714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.296752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.296942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.296982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.297125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.297180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.297341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.297404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.297537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.297576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 
00:37:43.233 [2024-11-18 18:44:41.297732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.233 [2024-11-18 18:44:41.297768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.233 qpair failed and we were unable to recover it. 00:37:43.233 [2024-11-18 18:44:41.297901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.297952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.298159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.298200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.298404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.298442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.298621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.298674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 
00:37:43.234 [2024-11-18 18:44:41.298800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.298835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.298953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.299003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.299232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.299298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.299461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.299520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.299685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.299720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 
00:37:43.234 [2024-11-18 18:44:41.299824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.299859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.299993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.300028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.300155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.300190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.300342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.300381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.300524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.300562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 
00:37:43.234 [2024-11-18 18:44:41.300722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.300758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.300934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.300975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.301134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.301188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.301377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.301432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.301573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.301615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 
00:37:43.234 [2024-11-18 18:44:41.301773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.301808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.301949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.301985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.302123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.302160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.302272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.302307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.302418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.302457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 
00:37:43.234 [2024-11-18 18:44:41.302579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.302627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.302789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.302824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.303013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.303063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.303272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.303310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.303555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.303618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 
00:37:43.234 [2024-11-18 18:44:41.303766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.303813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.304040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.304093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.304291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.304345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.304522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.304558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 00:37:43.234 [2024-11-18 18:44:41.304709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.234 [2024-11-18 18:44:41.304746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.234 qpair failed and we were unable to recover it. 
00:37:43.234 [2024-11-18 18:44:41.304937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.235 [2024-11-18 18:44:41.304990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.235 qpair failed and we were unable to recover it. 00:37:43.235 [2024-11-18 18:44:41.305277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.235 [2024-11-18 18:44:41.305335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.235 qpair failed and we were unable to recover it. 00:37:43.235 [2024-11-18 18:44:41.305512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.235 [2024-11-18 18:44:41.305550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.235 qpair failed and we were unable to recover it. 00:37:43.235 [2024-11-18 18:44:41.305733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.235 [2024-11-18 18:44:41.305770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.235 qpair failed and we were unable to recover it. 00:37:43.235 [2024-11-18 18:44:41.305915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.235 [2024-11-18 18:44:41.305958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.235 qpair failed and we were unable to recover it. 
00:37:43.235 [2024-11-18 18:44:41.306108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.235 [2024-11-18 18:44:41.306150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.235 qpair failed and we were unable to recover it. 00:37:43.235 [2024-11-18 18:44:41.306262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.235 [2024-11-18 18:44:41.306298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.235 qpair failed and we were unable to recover it. 00:37:43.235 [2024-11-18 18:44:41.306456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.235 [2024-11-18 18:44:41.306494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.235 qpair failed and we were unable to recover it. 00:37:43.235 [2024-11-18 18:44:41.306637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.235 [2024-11-18 18:44:41.306693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.235 qpair failed and we were unable to recover it. 00:37:43.235 [2024-11-18 18:44:41.306835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.235 [2024-11-18 18:44:41.306874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.235 qpair failed and we were unable to recover it. 
00:37:43.235 [2024-11-18 18:44:41.307090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.235 [2024-11-18 18:44:41.307128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.235 qpair failed and we were unable to recover it. 00:37:43.235 [2024-11-18 18:44:41.307240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.235 [2024-11-18 18:44:41.307278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.235 qpair failed and we were unable to recover it. 00:37:43.235 [2024-11-18 18:44:41.307393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.235 [2024-11-18 18:44:41.307436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.235 qpair failed and we were unable to recover it. 00:37:43.235 [2024-11-18 18:44:41.307623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.235 [2024-11-18 18:44:41.307674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.235 qpair failed and we were unable to recover it. 00:37:43.235 [2024-11-18 18:44:41.307800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.235 [2024-11-18 18:44:41.307849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.235 qpair failed and we were unable to recover it. 
00:37:43.235 [2024-11-18 18:44:41.308040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.235 [2024-11-18 18:44:41.308092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.235 qpair failed and we were unable to recover it. 00:37:43.235 [2024-11-18 18:44:41.308402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.235 [2024-11-18 18:44:41.308461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.235 qpair failed and we were unable to recover it. 00:37:43.235 [2024-11-18 18:44:41.308674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.235 [2024-11-18 18:44:41.308710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.235 qpair failed and we were unable to recover it. 00:37:43.235 [2024-11-18 18:44:41.308860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.235 [2024-11-18 18:44:41.308914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.235 qpair failed and we were unable to recover it. 00:37:43.235 [2024-11-18 18:44:41.309051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.235 [2024-11-18 18:44:41.309088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.235 qpair failed and we were unable to recover it. 
00:37:43.235 [2024-11-18 18:44:41.309285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.235 [2024-11-18 18:44:41.309322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.235 qpair failed and we were unable to recover it.
00:37:43.235 [... the same connect() failure (errno = 111, ECONNREFUSED) against 10.0.0.2, port=4420 repeats continuously through 18:44:41.332743, cycling over tqpairs 0x6150001ffe80, 0x6150001f2f00, 0x615000210000 and 0x61500021ff00; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:37:43.239 [2024-11-18 18:44:41.332888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.239 [2024-11-18 18:44:41.332928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.239 qpair failed and we were unable to recover it. 00:37:43.239 [2024-11-18 18:44:41.333067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.239 [2024-11-18 18:44:41.333102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.239 qpair failed and we were unable to recover it. 00:37:43.239 [2024-11-18 18:44:41.333347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.239 [2024-11-18 18:44:41.333408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.239 qpair failed and we were unable to recover it. 00:37:43.239 [2024-11-18 18:44:41.333562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.239 [2024-11-18 18:44:41.333596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.239 qpair failed and we were unable to recover it. 00:37:43.239 [2024-11-18 18:44:41.333725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.239 [2024-11-18 18:44:41.333760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.239 qpair failed and we were unable to recover it. 
00:37:43.239 [2024-11-18 18:44:41.333885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.239 [2024-11-18 18:44:41.333925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.239 qpair failed and we were unable to recover it. 00:37:43.239 [2024-11-18 18:44:41.334152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.239 [2024-11-18 18:44:41.334190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.239 qpair failed and we were unable to recover it. 00:37:43.239 [2024-11-18 18:44:41.334311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.239 [2024-11-18 18:44:41.334349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.239 qpair failed and we were unable to recover it. 00:37:43.239 [2024-11-18 18:44:41.334507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.239 [2024-11-18 18:44:41.334568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.239 qpair failed and we were unable to recover it. 00:37:43.239 [2024-11-18 18:44:41.334731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.239 [2024-11-18 18:44:41.334780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.239 qpair failed and we were unable to recover it. 
00:37:43.239 [2024-11-18 18:44:41.334940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.239 [2024-11-18 18:44:41.334989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.239 qpair failed and we were unable to recover it. 00:37:43.239 [2024-11-18 18:44:41.335126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.239 [2024-11-18 18:44:41.335167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.239 qpair failed and we were unable to recover it. 00:37:43.239 [2024-11-18 18:44:41.335342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.239 [2024-11-18 18:44:41.335410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.239 qpair failed and we were unable to recover it. 00:37:43.239 [2024-11-18 18:44:41.335551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.239 [2024-11-18 18:44:41.335586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.239 qpair failed and we were unable to recover it. 00:37:43.239 [2024-11-18 18:44:41.335755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.239 [2024-11-18 18:44:41.335804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.239 qpair failed and we were unable to recover it. 
00:37:43.239 [2024-11-18 18:44:41.336013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.239 [2024-11-18 18:44:41.336072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.239 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.336214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.336313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.336459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.336498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.336622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.336674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.336825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.336859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 
00:37:43.240 [2024-11-18 18:44:41.336964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.336998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.337136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.337170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.337307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.337345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.337457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.337494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.337689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.337723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 
00:37:43.240 [2024-11-18 18:44:41.337846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.337895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.338058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.338115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.338277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.338317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.338445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.338481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.338631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.338677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 
00:37:43.240 [2024-11-18 18:44:41.338836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.338890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.339053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.339100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.339386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.339467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.339630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.339687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.339809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.339846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 
00:37:43.240 [2024-11-18 18:44:41.339961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.339994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.340153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.340187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.340313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.340353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.340465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.340498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.340634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.340674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 
00:37:43.240 [2024-11-18 18:44:41.340778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.340813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.340992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.341029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.341149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.341189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.341322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.341375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.341488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.341525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 
00:37:43.240 [2024-11-18 18:44:41.341637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.341707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.341854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.341890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.341992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.342044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.240 qpair failed and we were unable to recover it. 00:37:43.240 [2024-11-18 18:44:41.342172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.240 [2024-11-18 18:44:41.342210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.342337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.342392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 
00:37:43.241 [2024-11-18 18:44:41.342505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.342543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.342716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.342752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.342868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.342920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.343095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.343141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.343287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.343326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 
00:37:43.241 [2024-11-18 18:44:41.343480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.343518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.343683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.343717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.343826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.343861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.344055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.344090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.344200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.344254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 
00:37:43.241 [2024-11-18 18:44:41.344388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.344426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.344549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.344583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.344708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.344741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.344846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.344896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.345048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.345082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 
00:37:43.241 [2024-11-18 18:44:41.345194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.345228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.345358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.345396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.345570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.345605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.345720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.345754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.345870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.345922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 
00:37:43.241 [2024-11-18 18:44:41.346052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.346106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.346222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.346259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.346426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.346481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.346677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.346717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 00:37:43.241 [2024-11-18 18:44:41.346837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.241 [2024-11-18 18:44:41.346873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.241 qpair failed and we were unable to recover it. 
00:37:43.241 [2024-11-18 18:44:41.347051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.241 [2024-11-18 18:44:41.347090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.241 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.347262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.347302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.347488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.347533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.347758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.347806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.347976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.348011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.348141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.348193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.348336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.348404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.348545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.348580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.348763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.348812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.348933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.348969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.349091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.349140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.349245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.349278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.349446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.349484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.349626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.349660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.349801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.349836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.349946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.349992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.350132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.350166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.350274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.350325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.350449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.350488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.350624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.350658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.350795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.350828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.350931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.350966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.351100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.351133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.351272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.351306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.351445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.351482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.351641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.351676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.351783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.351817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.351941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.351977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.352136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.352169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.352294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.352377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.352527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.352571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.242 [2024-11-18 18:44:41.352749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.242 [2024-11-18 18:44:41.352786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.242 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.352895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.352949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.353100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.353139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.353323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.353375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.353547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.353586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.353754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.353789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.353909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.353945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.354086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.354120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.354242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.354277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.354406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.354441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.354630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.354672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.354819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.354873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.355015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.355051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.355188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.355239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.355364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.355405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.355564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.355599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.355755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.355790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.355921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.355961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.356127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.356170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.356271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.356321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.356443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.356481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.356653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.356688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.356828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.356862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.357037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.357074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.357219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.357253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.357369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.357423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.357617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.357656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.357789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.357824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.357959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.357993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.358122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.358156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.358318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.358367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.358500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.358550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.358677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.358712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.358827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.358861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.358962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.358995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.243 [2024-11-18 18:44:41.359130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.243 [2024-11-18 18:44:41.359164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.243 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.359353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.359391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.359640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.359675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.359795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.359835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.360152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.360215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.360453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.360493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.360661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.360698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.360835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.360888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.361034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.361070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.361180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.361234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.361385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.361437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.361571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.361613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.361760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.361795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.361915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.361950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.362093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.362132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.362285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.362324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.362476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.362522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.362714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.362763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.362889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.362925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.363036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.363086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.363286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.363323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.363472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.363510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.363655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.363710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.363843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.363878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.364096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.364133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.364268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.364301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.364425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.364464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.364618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.364652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.364792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.364825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.364983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.365020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.365163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.365215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.365337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.365375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.365487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.365525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.365681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.365716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.244 [2024-11-18 18:44:41.365857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.244 [2024-11-18 18:44:41.365891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.244 qpair failed and we were unable to recover it.
00:37:43.245 [2024-11-18 18:44:41.365995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.245 [2024-11-18 18:44:41.366030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.245 qpair failed and we were unable to recover it.
00:37:43.245 [2024-11-18 18:44:41.366183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.245 [2024-11-18 18:44:41.366221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.245 qpair failed and we were unable to recover it.
00:37:43.245 [2024-11-18 18:44:41.366350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.245 [2024-11-18 18:44:41.366387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.245 qpair failed and we were unable to recover it.
00:37:43.245 [2024-11-18 18:44:41.366529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.245 [2024-11-18 18:44:41.366567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.245 qpair failed and we were unable to recover it.
00:37:43.245 [2024-11-18 18:44:41.366711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.245 [2024-11-18 18:44:41.366751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.245 qpair failed and we were unable to recover it.
00:37:43.245 [2024-11-18 18:44:41.366860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.245 [2024-11-18 18:44:41.366913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.245 qpair failed and we were unable to recover it.
00:37:43.245 [2024-11-18 18:44:41.367039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.245 [2024-11-18 18:44:41.367078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.245 qpair failed and we were unable to recover it.
00:37:43.245 [2024-11-18 18:44:41.367256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.245 [2024-11-18 18:44:41.367294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.245 qpair failed and we were unable to recover it.
00:37:43.245 [2024-11-18 18:44:41.367419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.245 [2024-11-18 18:44:41.367456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.245 qpair failed and we were unable to recover it.
00:37:43.245 [2024-11-18 18:44:41.367574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.245 [2024-11-18 18:44:41.367619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.245 qpair failed and we were unable to recover it.
00:37:43.245 [2024-11-18 18:44:41.367762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.245 [2024-11-18 18:44:41.367797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.245 qpair failed and we were unable to recover it.
00:37:43.245 [2024-11-18 18:44:41.367932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.245 [2024-11-18 18:44:41.367966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.245 qpair failed and we were unable to recover it.
00:37:43.245 [2024-11-18 18:44:41.368080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.245 [2024-11-18 18:44:41.368114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.245 qpair failed and we were unable to recover it.
00:37:43.245 [2024-11-18 18:44:41.368235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.245 [2024-11-18 18:44:41.368273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.245 qpair failed and we were unable to recover it.
00:37:43.245 [2024-11-18 18:44:41.368423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.245 [2024-11-18 18:44:41.368460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.245 qpair failed and we were unable to recover it. 00:37:43.245 [2024-11-18 18:44:41.368641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.245 [2024-11-18 18:44:41.368694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.245 qpair failed and we were unable to recover it. 00:37:43.245 [2024-11-18 18:44:41.368857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.245 [2024-11-18 18:44:41.368891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.245 qpair failed and we were unable to recover it. 00:37:43.245 [2024-11-18 18:44:41.369019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.245 [2024-11-18 18:44:41.369057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.245 qpair failed and we were unable to recover it. 00:37:43.245 [2024-11-18 18:44:41.369201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.245 [2024-11-18 18:44:41.369238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.245 qpair failed and we were unable to recover it. 
00:37:43.245 [2024-11-18 18:44:41.369386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.245 [2024-11-18 18:44:41.369423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.245 qpair failed and we were unable to recover it. 00:37:43.245 [2024-11-18 18:44:41.369564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.245 [2024-11-18 18:44:41.369625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.245 qpair failed and we were unable to recover it. 00:37:43.245 [2024-11-18 18:44:41.369762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.245 [2024-11-18 18:44:41.369804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.245 qpair failed and we were unable to recover it. 00:37:43.245 [2024-11-18 18:44:41.369921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.245 [2024-11-18 18:44:41.369957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.245 qpair failed and we were unable to recover it. 00:37:43.245 [2024-11-18 18:44:41.370070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.245 [2024-11-18 18:44:41.370106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.245 qpair failed and we were unable to recover it. 
00:37:43.245 [2024-11-18 18:44:41.370226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.245 [2024-11-18 18:44:41.370263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.245 qpair failed and we were unable to recover it. 00:37:43.245 [2024-11-18 18:44:41.370454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.245 [2024-11-18 18:44:41.370523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.245 qpair failed and we were unable to recover it. 00:37:43.245 [2024-11-18 18:44:41.370668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.245 [2024-11-18 18:44:41.370717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.245 qpair failed and we were unable to recover it. 00:37:43.245 [2024-11-18 18:44:41.370862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.245 [2024-11-18 18:44:41.370918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.245 qpair failed and we were unable to recover it. 00:37:43.245 [2024-11-18 18:44:41.371041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.245 [2024-11-18 18:44:41.371080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.245 qpair failed and we were unable to recover it. 
00:37:43.245 [2024-11-18 18:44:41.371244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.245 [2024-11-18 18:44:41.371282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.245 qpair failed and we were unable to recover it. 00:37:43.245 [2024-11-18 18:44:41.371433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.245 [2024-11-18 18:44:41.371471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.245 qpair failed and we were unable to recover it. 00:37:43.245 [2024-11-18 18:44:41.371618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.245 [2024-11-18 18:44:41.371674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.245 qpair failed and we were unable to recover it. 00:37:43.245 [2024-11-18 18:44:41.371809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.245 [2024-11-18 18:44:41.371844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.245 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.371956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.372009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 
00:37:43.246 [2024-11-18 18:44:41.372191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.372255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.372470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.372507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.372639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.372692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.372808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.372841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.372942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.372976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 
00:37:43.246 [2024-11-18 18:44:41.373081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.373115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.373314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.373370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.373535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.373575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.373760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.373810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.374111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.374171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 
00:37:43.246 [2024-11-18 18:44:41.374337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.374380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.374548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.374588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.374738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.374774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.374884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.374935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.375078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.375132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 
00:37:43.246 [2024-11-18 18:44:41.375298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.375362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.375496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.375534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.375695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.375731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.375827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.375861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.376000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.376034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 
00:37:43.246 [2024-11-18 18:44:41.376143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.376195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.376343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.376380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.376510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.376544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.376691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.376741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 00:37:43.246 [2024-11-18 18:44:41.376876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.246 [2024-11-18 18:44:41.376912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.246 qpair failed and we were unable to recover it. 
00:37:43.247 [2024-11-18 18:44:41.377052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.377088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.377221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.377256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.377405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.377448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.377626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.377682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.377800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.377833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 
00:37:43.247 [2024-11-18 18:44:41.377958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.377997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.378168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.378205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.378324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.378363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.378513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.378562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.378707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.378742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 
00:37:43.247 [2024-11-18 18:44:41.378846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.378880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.379010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.379047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.379290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.379357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.379509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.379547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.379712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.379746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 
00:37:43.247 [2024-11-18 18:44:41.379852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.379888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.380037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.380090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.380280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.380349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.380556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.380593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.380740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.380775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 
00:37:43.247 [2024-11-18 18:44:41.380887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.380925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.381104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.381170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.381287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.381325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.381460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.381493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.381641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.381678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 
00:37:43.247 [2024-11-18 18:44:41.381806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.381841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.382022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.382056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.382208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.382247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.382356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.382395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 00:37:43.247 [2024-11-18 18:44:41.382562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.247 [2024-11-18 18:44:41.382625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.247 qpair failed and we were unable to recover it. 
00:37:43.247 [2024-11-18 18:44:41.382817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.247 [2024-11-18 18:44:41.382866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.247 qpair failed and we were unable to recover it.
00:37:43.247 [2024-11-18 18:44:41.383000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.247 [2024-11-18 18:44:41.383043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.247 qpair failed and we were unable to recover it.
00:37:43.247 [2024-11-18 18:44:41.383224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.247 [2024-11-18 18:44:41.383278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.247 qpair failed and we were unable to recover it.
00:37:43.247 [2024-11-18 18:44:41.383423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.383457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.383595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.383638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.383740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.383774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.383907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.383942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.384079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.384125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.384239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.384276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.384410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.384444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.384584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.384631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.384738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.384772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.384901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.384943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.385051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.385084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.385223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.385259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.385392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.385427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.385541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.385577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.385730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.385765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.385889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.385956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.386107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.386145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.386315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.386356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.386525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.386561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.386680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.386717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.386847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.386883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.387043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.387079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.387216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.387271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.387429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.387497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.387687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.387736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.387871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.387919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.388054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.388095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.388242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.388281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.388418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.388455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.388581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.388625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.248 qpair failed and we were unable to recover it.
00:37:43.248 [2024-11-18 18:44:41.388735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.248 [2024-11-18 18:44:41.388773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.388931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.388985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.389190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.389237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.389423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.389489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.389658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.389694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.389799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.389833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.389966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.390015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.390132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.390169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.390321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.390395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.390584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.390629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.390761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.390797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.390946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.390980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.391083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.391135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.391304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.391342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.391549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.391587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.391741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.391789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.391906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.391963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.392117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.392152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.392377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.392437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.392587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.392656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.392768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.392802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.392907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.392942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.393070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.393108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.393308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.393345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.393475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.393515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.393652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.393703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.393868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.393901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.394002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.394036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.394144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.394179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.394303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.394337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.394467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.394520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.249 qpair failed and we were unable to recover it.
00:37:43.249 [2024-11-18 18:44:41.394684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.249 [2024-11-18 18:44:41.394720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.394858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.394892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.394993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.395043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.395157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.395194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.395347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.395380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.395514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.395564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.395729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.395780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.395907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.395946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.396076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.396130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.396290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.396376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.396573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.396618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.396773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.396809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.396972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.397011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.397207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.397242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.397380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.397420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.397528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.397573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.397765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.397815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.397939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.397977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.398093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.398132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.398305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.398359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.398485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.398534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.398672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.398721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.398869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.398929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.399157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.399236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.399430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.399470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.399618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.399654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.399803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.399839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.250 [2024-11-18 18:44:41.400017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.250 [2024-11-18 18:44:41.400056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.250 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.400218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.400271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.400435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.400488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.400615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.400671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.400806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.400841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.400987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.401022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.401133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.401206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.401345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.401396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.401541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.401577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.401731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.401766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.401905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.401949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.402099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.402139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.402260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.402297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.402450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.402488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.402645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.402680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.402793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.402827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.402969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.403002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.403131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.403168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.403273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.403310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.403426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.403464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.403635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.403671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.403801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.403850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.403995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.404035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.404170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.251 [2024-11-18 18:44:41.404223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.251 qpair failed and we were unable to recover it.
00:37:43.251 [2024-11-18 18:44:41.404335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.251 [2024-11-18 18:44:41.404373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.251 qpair failed and we were unable to recover it. 00:37:43.251 [2024-11-18 18:44:41.404492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.251 [2024-11-18 18:44:41.404531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.251 qpair failed and we were unable to recover it. 00:37:43.251 [2024-11-18 18:44:41.404665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.251 [2024-11-18 18:44:41.404700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.251 qpair failed and we were unable to recover it. 00:37:43.251 [2024-11-18 18:44:41.404817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.251 [2024-11-18 18:44:41.404851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.251 qpair failed and we were unable to recover it. 00:37:43.251 [2024-11-18 18:44:41.404982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.251 [2024-11-18 18:44:41.405026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.251 qpair failed and we were unable to recover it. 
00:37:43.251 [2024-11-18 18:44:41.405187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.405221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 00:37:43.252 [2024-11-18 18:44:41.405366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.405404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 00:37:43.252 [2024-11-18 18:44:41.405542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.405579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 00:37:43.252 [2024-11-18 18:44:41.405713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.405747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 00:37:43.252 [2024-11-18 18:44:41.405991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.406042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 
00:37:43.252 [2024-11-18 18:44:41.406155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.406190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 00:37:43.252 [2024-11-18 18:44:41.406360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.406398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 00:37:43.252 [2024-11-18 18:44:41.406525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.406558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 00:37:43.252 [2024-11-18 18:44:41.406673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.406707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 00:37:43.252 [2024-11-18 18:44:41.406816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.406850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 
00:37:43.252 [2024-11-18 18:44:41.406978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.407031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 00:37:43.252 [2024-11-18 18:44:41.407132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.407169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 00:37:43.252 [2024-11-18 18:44:41.407299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.407352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 00:37:43.252 [2024-11-18 18:44:41.407503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.407542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 00:37:43.252 [2024-11-18 18:44:41.407705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.407755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 
00:37:43.252 [2024-11-18 18:44:41.407931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.407970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 00:37:43.252 [2024-11-18 18:44:41.408119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.408159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 00:37:43.252 [2024-11-18 18:44:41.408314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.408354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 00:37:43.252 [2024-11-18 18:44:41.408531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.408567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 00:37:43.252 [2024-11-18 18:44:41.408685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.408733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 
00:37:43.252 [2024-11-18 18:44:41.408892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.408932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 00:37:43.252 [2024-11-18 18:44:41.409095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.409129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 00:37:43.252 [2024-11-18 18:44:41.409265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.409319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 00:37:43.252 [2024-11-18 18:44:41.409437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.409475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 00:37:43.252 [2024-11-18 18:44:41.409612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.409647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.252 qpair failed and we were unable to recover it. 
00:37:43.252 [2024-11-18 18:44:41.409781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.252 [2024-11-18 18:44:41.409815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.409946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.409984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.410106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.410157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.410288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.410322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.410492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.410530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 
00:37:43.253 [2024-11-18 18:44:41.410658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.410692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.410796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.410829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.410937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.410971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.411068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.411102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.411195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.411228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 
00:37:43.253 [2024-11-18 18:44:41.411357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.411395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.411551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.411589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.411765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.411802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.411976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.412029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.412198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.412240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 
00:37:43.253 [2024-11-18 18:44:41.412342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.412375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.412526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.412563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.412725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.412760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.412872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.412926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.413039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.413077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 
00:37:43.253 [2024-11-18 18:44:41.413254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.413288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.413427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.413461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.413585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.413626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.413765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.413799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.413907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.413961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 
00:37:43.253 [2024-11-18 18:44:41.414107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.414146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.414277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.414312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.414447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.414482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.414672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.414708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.414831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.414864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 
00:37:43.253 [2024-11-18 18:44:41.415000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.415053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.253 [2024-11-18 18:44:41.415197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.253 [2024-11-18 18:44:41.415235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.253 qpair failed and we were unable to recover it. 00:37:43.254 [2024-11-18 18:44:41.415387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.415421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 00:37:43.254 [2024-11-18 18:44:41.415559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.415593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 00:37:43.254 [2024-11-18 18:44:41.415704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.415738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 
00:37:43.254 [2024-11-18 18:44:41.415863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.415912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 00:37:43.254 [2024-11-18 18:44:41.416056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.416091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 00:37:43.254 [2024-11-18 18:44:41.416240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.416306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 00:37:43.254 [2024-11-18 18:44:41.416449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.416486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 00:37:43.254 [2024-11-18 18:44:41.416618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.416652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 
00:37:43.254 [2024-11-18 18:44:41.416814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.416848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 00:37:43.254 [2024-11-18 18:44:41.417008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.417068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 00:37:43.254 [2024-11-18 18:44:41.417216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.417253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 00:37:43.254 [2024-11-18 18:44:41.417366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.417404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 00:37:43.254 [2024-11-18 18:44:41.417550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.417625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 
00:37:43.254 [2024-11-18 18:44:41.417765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.417802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 00:37:43.254 [2024-11-18 18:44:41.417937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.417973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 00:37:43.254 [2024-11-18 18:44:41.418086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.418120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 00:37:43.254 [2024-11-18 18:44:41.418336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.418394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 00:37:43.254 [2024-11-18 18:44:41.418542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.418581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 
00:37:43.254 [2024-11-18 18:44:41.418713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.418747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 00:37:43.254 [2024-11-18 18:44:41.418850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.418883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 00:37:43.254 [2024-11-18 18:44:41.419041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.419075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 00:37:43.254 [2024-11-18 18:44:41.419261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.419315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 00:37:43.254 [2024-11-18 18:44:41.419444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.254 [2024-11-18 18:44:41.419484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.254 qpair failed and we were unable to recover it. 
00:37:43.254 [2024-11-18 18:44:41.419604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.254 [2024-11-18 18:44:41.419664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.254 qpair failed and we were unable to recover it.
00:37:43.254 [2024-11-18 18:44:41.419783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.254 [2024-11-18 18:44:41.419841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.254 qpair failed and we were unable to recover it.
00:37:43.254 [2024-11-18 18:44:41.419972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.254 [2024-11-18 18:44:41.420011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.254 qpair failed and we were unable to recover it.
00:37:43.254 [2024-11-18 18:44:41.420157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.254 [2024-11-18 18:44:41.420197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.254 qpair failed and we were unable to recover it.
00:37:43.254 [2024-11-18 18:44:41.420355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.254 [2024-11-18 18:44:41.420409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.254 qpair failed and we were unable to recover it.
00:37:43.254 [2024-11-18 18:44:41.420541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.254 [2024-11-18 18:44:41.420577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.254 qpair failed and we were unable to recover it.
00:37:43.254 [2024-11-18 18:44:41.420686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.254 [2024-11-18 18:44:41.420720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.254 qpair failed and we were unable to recover it.
00:37:43.254 [2024-11-18 18:44:41.420872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.254 [2024-11-18 18:44:41.420910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.254 qpair failed and we were unable to recover it.
00:37:43.254 [2024-11-18 18:44:41.421085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.254 [2024-11-18 18:44:41.421143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.254 qpair failed and we were unable to recover it.
00:37:43.254 [2024-11-18 18:44:41.421251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.421289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.421400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.421437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.421638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.421697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.421857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.421911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.422039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.422079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.422233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.422292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.422439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.422478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.422593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.422657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.422773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.422807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.422960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.422998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.423141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.423179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.423332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.423372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.423562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.423600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.423728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.423764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.423874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.423909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.424063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.424116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.424300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.424354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.424501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.424536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.424655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.424692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.424823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.424857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.425001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.425063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.425284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.425344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.425465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.425502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.425632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.425667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.425791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.425839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.426003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.426058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.426292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.426360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.426495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.426535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.426707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.426743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.426859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.426895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.427029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.427070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.427227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.427263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.427458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.427498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.427656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.427710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.427841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.427909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.428040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.255 [2024-11-18 18:44:41.428080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.255 qpair failed and we were unable to recover it.
00:37:43.255 [2024-11-18 18:44:41.428286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.428345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.428484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.428522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.428723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.428772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.428921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.428959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.429113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.429167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.429323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.429387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.429531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.429568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.429698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.429736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.429852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.429904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.430096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.430134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.430320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.430358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.430505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.430542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.430698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.430748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.430884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.430933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.431052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.431088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.431284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.431342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.431540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.431577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.431747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.431785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.431946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.431986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.432205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.432244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.432426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.432482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.432631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.432683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.432791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.432826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.432939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.432990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.433149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.433218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.433367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.433404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.433532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.433569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.433755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.433805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.433965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.434032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.434223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.434279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.434408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.434445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.434574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.434637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.434792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.434840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.434947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.256 [2024-11-18 18:44:41.434982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.256 qpair failed and we were unable to recover it.
00:37:43.256 [2024-11-18 18:44:41.435091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.257 [2024-11-18 18:44:41.435130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.257 qpair failed and we were unable to recover it.
00:37:43.257 [2024-11-18 18:44:41.435233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.257 [2024-11-18 18:44:41.435266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.257 qpair failed and we were unable to recover it.
00:37:43.257 [2024-11-18 18:44:41.435381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.257 [2024-11-18 18:44:41.435415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.257 qpair failed and we were unable to recover it.
00:37:43.257 [2024-11-18 18:44:41.435528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.257 [2024-11-18 18:44:41.435562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.257 qpair failed and we were unable to recover it.
00:37:43.257 [2024-11-18 18:44:41.435688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.257 [2024-11-18 18:44:41.435722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.257 qpair failed and we were unable to recover it.
00:37:43.257 [2024-11-18 18:44:41.435861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.257 [2024-11-18 18:44:41.435915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.257 qpair failed and we were unable to recover it.
00:37:43.257 [2024-11-18 18:44:41.436057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.257 [2024-11-18 18:44:41.436096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.257 qpair failed and we were unable to recover it.
00:37:43.257 [2024-11-18 18:44:41.436292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.257 [2024-11-18 18:44:41.436330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.257 qpair failed and we were unable to recover it.
00:37:43.257 [2024-11-18 18:44:41.436494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.257 [2024-11-18 18:44:41.436547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.257 qpair failed and we were unable to recover it.
00:37:43.257 [2024-11-18 18:44:41.436692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.257 [2024-11-18 18:44:41.436726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.257 qpair failed and we were unable to recover it.
00:37:43.257 [2024-11-18 18:44:41.436858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.257 [2024-11-18 18:44:41.436892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.257 qpair failed and we were unable to recover it.
00:37:43.257 [2024-11-18 18:44:41.437015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.257 [2024-11-18 18:44:41.437050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.257 qpair failed and we were unable to recover it.
00:37:43.257 [2024-11-18 18:44:41.437211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.257 [2024-11-18 18:44:41.437250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.257 qpair failed and we were unable to recover it.
00:37:43.257 [2024-11-18 18:44:41.437499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.257 [2024-11-18 18:44:41.437538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.257 qpair failed and we were unable to recover it.
00:37:43.257 [2024-11-18 18:44:41.437693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.257 [2024-11-18 18:44:41.437729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.257 qpair failed and we were unable to recover it.
00:37:43.257 [2024-11-18 18:44:41.437845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.257 [2024-11-18 18:44:41.437879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.257 qpair failed and we were unable to recover it.
00:37:43.257 [2024-11-18 18:44:41.438017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.257 [2024-11-18 18:44:41.438051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.257 qpair failed and we were unable to recover it. 00:37:43.257 [2024-11-18 18:44:41.438204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.257 [2024-11-18 18:44:41.438242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.257 qpair failed and we were unable to recover it. 00:37:43.257 [2024-11-18 18:44:41.438386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.257 [2024-11-18 18:44:41.438441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.257 qpair failed and we were unable to recover it. 00:37:43.257 [2024-11-18 18:44:41.438627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.257 [2024-11-18 18:44:41.438667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.257 qpair failed and we were unable to recover it. 00:37:43.257 [2024-11-18 18:44:41.438795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.257 [2024-11-18 18:44:41.438844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.257 qpair failed and we were unable to recover it. 
00:37:43.257 [2024-11-18 18:44:41.439041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.257 [2024-11-18 18:44:41.439080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.257 qpair failed and we were unable to recover it. 00:37:43.257 [2024-11-18 18:44:41.439279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.257 [2024-11-18 18:44:41.439318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.257 qpair failed and we were unable to recover it. 00:37:43.257 [2024-11-18 18:44:41.439483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.257 [2024-11-18 18:44:41.439522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.257 qpair failed and we were unable to recover it. 00:37:43.257 [2024-11-18 18:44:41.439690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.257 [2024-11-18 18:44:41.439725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.257 qpair failed and we were unable to recover it. 00:37:43.257 [2024-11-18 18:44:41.439831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.257 [2024-11-18 18:44:41.439865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.257 qpair failed and we were unable to recover it. 
00:37:43.257 [2024-11-18 18:44:41.440041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.257 [2024-11-18 18:44:41.440079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.257 qpair failed and we were unable to recover it. 00:37:43.257 [2024-11-18 18:44:41.440284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.257 [2024-11-18 18:44:41.440341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.257 qpair failed and we were unable to recover it. 00:37:43.257 [2024-11-18 18:44:41.440533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.257 [2024-11-18 18:44:41.440583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.257 qpair failed and we were unable to recover it. 00:37:43.257 [2024-11-18 18:44:41.440712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.257 [2024-11-18 18:44:41.440760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.257 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.440936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.440996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 
00:37:43.258 [2024-11-18 18:44:41.441134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.441205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.441368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.441407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.441600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.441664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.441776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.441811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.441916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.441951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 
00:37:43.258 [2024-11-18 18:44:41.442140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.442179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.442316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.442369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.442516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.442567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.442710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.442746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.442891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.442933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 
00:37:43.258 [2024-11-18 18:44:41.443094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.443133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.443284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.443323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.443495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.443535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.443653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.443707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.443826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.443866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 
00:37:43.258 [2024-11-18 18:44:41.443992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.444031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.444215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.444274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.444420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.444458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.444602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.444662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.444795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.444844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 
00:37:43.258 [2024-11-18 18:44:41.445009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.445067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.445220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.445258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.445429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.445468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.445615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.445651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.445790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.445826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 
00:37:43.258 [2024-11-18 18:44:41.445931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.445967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.446143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.446181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.446297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.446336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.446503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.446542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.446703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.446752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 
00:37:43.258 [2024-11-18 18:44:41.446889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.258 [2024-11-18 18:44:41.446938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.258 qpair failed and we were unable to recover it. 00:37:43.258 [2024-11-18 18:44:41.447087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.447143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.447255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.447295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.447442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.447487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.447660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.447709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 
00:37:43.259 [2024-11-18 18:44:41.447830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.447868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.448041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.448096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.448245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.448281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.448378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.448413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.448545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.448581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 
00:37:43.259 [2024-11-18 18:44:41.448730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.448766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.448902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.448937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.449050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.449084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.449227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.449262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.449372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.449405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 
00:37:43.259 [2024-11-18 18:44:41.449545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.449578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.449701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.449738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.449890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.449943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.450093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.450151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.450300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.450362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 
00:37:43.259 [2024-11-18 18:44:41.450476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.450511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.450647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.450703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.450824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.450859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.450995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.451029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.451136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.451169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 
00:37:43.259 [2024-11-18 18:44:41.451302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.451336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.451465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.451498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.451605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.451646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.451781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.451830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.451941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.451996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 
00:37:43.259 [2024-11-18 18:44:41.452149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.452187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.452315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.452353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.452501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.452542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.452694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.452731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 00:37:43.259 [2024-11-18 18:44:41.452859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.259 [2024-11-18 18:44:41.452913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.259 qpair failed and we were unable to recover it. 
00:37:43.259 [2024-11-18 18:44:41.453083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.260 [2024-11-18 18:44:41.453139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.260 qpair failed and we were unable to recover it. 00:37:43.260 [2024-11-18 18:44:41.453315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.260 [2024-11-18 18:44:41.453372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.260 qpair failed and we were unable to recover it. 00:37:43.260 [2024-11-18 18:44:41.453510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.260 [2024-11-18 18:44:41.453544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.260 qpair failed and we were unable to recover it. 00:37:43.260 [2024-11-18 18:44:41.453668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.260 [2024-11-18 18:44:41.453707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.260 qpair failed and we were unable to recover it. 00:37:43.260 [2024-11-18 18:44:41.453827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.260 [2024-11-18 18:44:41.453861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.260 qpair failed and we were unable to recover it. 
00:37:43.260 [2024-11-18 18:44:41.454012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.260 [2024-11-18 18:44:41.454065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.260 qpair failed and we were unable to recover it. 00:37:43.260 [2024-11-18 18:44:41.454262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.260 [2024-11-18 18:44:41.454330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.260 qpair failed and we were unable to recover it. 00:37:43.260 [2024-11-18 18:44:41.454459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.260 [2024-11-18 18:44:41.454493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.260 qpair failed and we were unable to recover it. 00:37:43.260 [2024-11-18 18:44:41.454597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.260 [2024-11-18 18:44:41.454641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.260 qpair failed and we were unable to recover it. 00:37:43.260 [2024-11-18 18:44:41.454766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.260 [2024-11-18 18:44:41.454804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.260 qpair failed and we were unable to recover it. 
00:37:43.260 [2024-11-18 18:44:41.454994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.260 [2024-11-18 18:44:41.455052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.260 qpair failed and we were unable to recover it. 00:37:43.260 [2024-11-18 18:44:41.455202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.260 [2024-11-18 18:44:41.455261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.260 qpair failed and we were unable to recover it. 00:37:43.260 [2024-11-18 18:44:41.455460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.260 [2024-11-18 18:44:41.455519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.260 qpair failed and we were unable to recover it. 00:37:43.260 [2024-11-18 18:44:41.455638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.260 [2024-11-18 18:44:41.455677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.260 qpair failed and we were unable to recover it. 00:37:43.260 [2024-11-18 18:44:41.455792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.260 [2024-11-18 18:44:41.455827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.260 qpair failed and we were unable to recover it. 
00:37:43.260 [2024-11-18 18:44:41.455984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.456021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.260 [2024-11-18 18:44:41.456222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.456313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.260 [2024-11-18 18:44:41.456467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.456506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.260 [2024-11-18 18:44:41.456673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.456709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.260 [2024-11-18 18:44:41.456826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.456861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.260 [2024-11-18 18:44:41.456994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.457029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.260 [2024-11-18 18:44:41.457136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.457170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.260 [2024-11-18 18:44:41.457346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.457384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.260 [2024-11-18 18:44:41.457542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.457597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.260 [2024-11-18 18:44:41.457760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.457814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.260 [2024-11-18 18:44:41.457994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.458061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.260 [2024-11-18 18:44:41.458221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.458284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.260 [2024-11-18 18:44:41.458472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.458511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.260 [2024-11-18 18:44:41.458678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.458714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.260 [2024-11-18 18:44:41.458850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.458906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.260 [2024-11-18 18:44:41.459091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.459145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.260 [2024-11-18 18:44:41.459331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.459394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.260 [2024-11-18 18:44:41.459499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.459534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.260 [2024-11-18 18:44:41.459690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.459742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.260 [2024-11-18 18:44:41.459846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.459881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.260 [2024-11-18 18:44:41.460049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.260 [2024-11-18 18:44:41.460084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.260 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.460240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.460292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.460411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.460450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.460576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.460621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.460809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.460863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.461072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.461135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.461251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.461304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.461423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.461461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.461585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.461624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.461752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.461787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.461891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.461924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.462071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.462123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.462248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.462285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.462438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.462475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.462621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.462674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.462812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.462846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.463014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.463051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.463233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.463270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.463444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.463481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.463599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.463662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.463795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.463828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.463965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.464031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.464162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.464202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.464343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.464400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.464520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.464558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.464733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.464769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.464904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.464938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.465082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.465133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.465260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.465297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.465503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.465547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.465708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.465757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.465867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.465902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.466017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.466051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.466156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.466190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.466376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.466413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.466579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.466623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.261 qpair failed and we were unable to recover it.
00:37:43.261 [2024-11-18 18:44:41.466728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.261 [2024-11-18 18:44:41.466762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.466907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.466961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.467151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.467194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.467330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.467371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.467554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.467594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.467771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.467806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.467918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.467972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.468126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.468166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.468324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.468364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.468487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.468534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.468699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.468736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.468877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.468914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.469047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.469083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.469303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.469361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.469517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.469553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.469675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.469711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.469835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.469891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.470002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.470040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.470194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.470234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.470434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.470475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.470630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.470684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.470793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.470830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.471035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.471089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.471250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.471312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.471477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.471529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.471690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.471727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.471841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.471887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.472037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.472092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.472267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.472307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.472452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.472491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.472675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.472725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.472884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.472941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.262 qpair failed and we were unable to recover it.
00:37:43.262 [2024-11-18 18:44:41.473097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.262 [2024-11-18 18:44:41.473152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.263 qpair failed and we were unable to recover it.
00:37:43.263 [2024-11-18 18:44:41.473342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.263 [2024-11-18 18:44:41.473414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.263 qpair failed and we were unable to recover it.
00:37:43.263 [2024-11-18 18:44:41.473553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.263 [2024-11-18 18:44:41.473603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.263 qpair failed and we were unable to recover it.
00:37:43.263 [2024-11-18 18:44:41.473744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.263 [2024-11-18 18:44:41.473781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.263 qpair failed and we were unable to recover it.
00:37:43.263 [2024-11-18 18:44:41.473923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.263 [2024-11-18 18:44:41.473966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.263 qpair failed and we were unable to recover it.
00:37:43.263 [2024-11-18 18:44:41.474154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.263 [2024-11-18 18:44:41.474195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.263 qpair failed and we were unable to recover it.
00:37:43.263 [2024-11-18 18:44:41.474375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.263 [2024-11-18 18:44:41.474437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.263 qpair failed and we were unable to recover it.
00:37:43.263 [2024-11-18 18:44:41.474637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.474680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.474853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.474889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.475053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.475108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.475279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.475343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.475478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.475533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 
00:37:43.263 [2024-11-18 18:44:41.475687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.475724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.475857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.475897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.476048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.476102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.476270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.476309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.476454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.476491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 
00:37:43.263 [2024-11-18 18:44:41.476647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.476687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.476791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.476826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.476968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.477007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.477142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.477195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.477372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.477412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 
00:37:43.263 [2024-11-18 18:44:41.477527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.477566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.477750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.477800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.477940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.477977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.478111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.478165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.478287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.478325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 
00:37:43.263 [2024-11-18 18:44:41.478501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.478540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.478705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.478741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.478888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.478923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.479149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.479206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.479358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.479397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 
00:37:43.263 [2024-11-18 18:44:41.479549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.479589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.479766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.479801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.479907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.479960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.480073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.480112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 00:37:43.263 [2024-11-18 18:44:41.480241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.480294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.263 qpair failed and we were unable to recover it. 
00:37:43.263 [2024-11-18 18:44:41.480439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.263 [2024-11-18 18:44:41.480477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.480633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.480718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.480915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.480984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.481177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.481233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.481374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.481416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 
00:37:43.264 [2024-11-18 18:44:41.481534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.481569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.481692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.481729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.481831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.481865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.482037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.482089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.482229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.482284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 
00:37:43.264 [2024-11-18 18:44:41.482428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.482494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.482685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.482720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.482822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.482856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.482956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.482989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.483125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.483159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 
00:37:43.264 [2024-11-18 18:44:41.483299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.483336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.483441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.483477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.483587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.483629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.483804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.483857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.483983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.484021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 
00:37:43.264 [2024-11-18 18:44:41.484186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.484241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.484353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.484389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.484517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.484552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.484699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.484737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.484852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.484897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 
00:37:43.264 [2024-11-18 18:44:41.485026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.485065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.485174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.485211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.485351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.485391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.485564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.485620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.485776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.485813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 
00:37:43.264 [2024-11-18 18:44:41.485961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.486001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.486145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.486203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.486372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.486407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.486542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.486580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.486714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.486748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 
00:37:43.264 [2024-11-18 18:44:41.486901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.486939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.487075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.487113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.487258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.487296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.487421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.487459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.487567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.487605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 
00:37:43.264 [2024-11-18 18:44:41.487786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.487820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.487989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.488026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.488183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.488221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.488349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.488408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.264 [2024-11-18 18:44:41.488569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.488622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 
00:37:43.264 [2024-11-18 18:44:41.488759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.264 [2024-11-18 18:44:41.488793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.264 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.488908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.488942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.489081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.489133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.489283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.489320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.489530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.489567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 
00:37:43.265 [2024-11-18 18:44:41.489733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.489768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.489925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.489975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.490171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.490213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.490367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.490407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.490557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.490595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 
00:37:43.265 [2024-11-18 18:44:41.490777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.490812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.490950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.490993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.491119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.491157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.491342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.491380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.491541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.491598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 
00:37:43.265 [2024-11-18 18:44:41.491805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.491853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.491974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.492023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.492142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.492195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.492319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.492368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.492501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.492535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 
00:37:43.265 [2024-11-18 18:44:41.492658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.492694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.492813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.492848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.493002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.493060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.493205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.493239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.493365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.493403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 
00:37:43.265 [2024-11-18 18:44:41.493538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.493573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.493698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.493740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.493842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.493884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.493989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.494023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.494165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.494199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 
00:37:43.265 [2024-11-18 18:44:41.494326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.494360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.494534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.494569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.494695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.494731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.494835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.494896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.495075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.495110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 
00:37:43.265 [2024-11-18 18:44:41.495218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.495252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.495362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.495397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.495520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.495558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.265 [2024-11-18 18:44:41.495713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.265 [2024-11-18 18:44:41.495749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.265 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.495897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.495931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 
00:37:43.266 [2024-11-18 18:44:41.496081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.496132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.496279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.496317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.496511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.496556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.496744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.496793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.496944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.496980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 
00:37:43.266 [2024-11-18 18:44:41.497201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.497235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.497395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.497433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.497588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.497631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.497797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.497846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.498072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.498113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 
00:37:43.266 [2024-11-18 18:44:41.498293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.498353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.498496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.498533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.498695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.498731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.498900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.498949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.499116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.499175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 
00:37:43.266 [2024-11-18 18:44:41.499336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.499394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.499529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.499565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.499700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.499735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.499871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.499913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.500016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.500051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 
00:37:43.266 [2024-11-18 18:44:41.500222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.500257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.500399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.500442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.500555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.500591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.500758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.500813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.500979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.501035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 
00:37:43.266 [2024-11-18 18:44:41.501170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.501210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.501381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.501444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.501598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.501684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.501830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.501907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.502118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.502177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 
00:37:43.266 [2024-11-18 18:44:41.502322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.502381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.502529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.502567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.502707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.502742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.502912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.502966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.503134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.503174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 
00:37:43.266 [2024-11-18 18:44:41.503309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.503371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.503538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.503578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.266 [2024-11-18 18:44:41.503719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.266 [2024-11-18 18:44:41.503754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.266 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.503864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.503898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.504032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.504085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 
00:37:43.267 [2024-11-18 18:44:41.504244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.504283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.504480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.504546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.504694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.504743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.504870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.504920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.505072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.505111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 
00:37:43.267 [2024-11-18 18:44:41.505276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.505342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.505547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.505586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.505732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.505767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.505900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.505934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.506111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.506150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 
00:37:43.267 [2024-11-18 18:44:41.506265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.506303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.506490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.506528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.506687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.506723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.506875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.506910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.507065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.507111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 
00:37:43.267 [2024-11-18 18:44:41.507237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.507275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.507394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.507441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.507601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.507650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.507807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.507882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.508041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.508081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 
00:37:43.267 [2024-11-18 18:44:41.508198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.508237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.508367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.508405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.508529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.508563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.508687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.508741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 00:37:43.267 [2024-11-18 18:44:41.508860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.267 [2024-11-18 18:44:41.508917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.267 qpair failed and we were unable to recover it. 
00:37:43.268 [2024-11-18 18:44:41.509037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.268 [2024-11-18 18:44:41.509072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.268 qpair failed and we were unable to recover it. 00:37:43.268 [2024-11-18 18:44:41.509219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.268 [2024-11-18 18:44:41.509258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.268 qpair failed and we were unable to recover it. 00:37:43.268 [2024-11-18 18:44:41.509367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.268 [2024-11-18 18:44:41.509402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.268 qpair failed and we were unable to recover it. 00:37:43.268 [2024-11-18 18:44:41.509545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.268 [2024-11-18 18:44:41.509581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.268 qpair failed and we were unable to recover it. 00:37:43.268 [2024-11-18 18:44:41.509739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.268 [2024-11-18 18:44:41.509815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.268 qpair failed and we were unable to recover it. 
00:37:43.268 [2024-11-18 18:44:41.509967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.268 [2024-11-18 18:44:41.510005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.268 qpair failed and we were unable to recover it. 00:37:43.268 [2024-11-18 18:44:41.510137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.268 [2024-11-18 18:44:41.510171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.268 qpair failed and we were unable to recover it. 00:37:43.268 [2024-11-18 18:44:41.510290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.268 [2024-11-18 18:44:41.510325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.268 qpair failed and we were unable to recover it. 00:37:43.268 [2024-11-18 18:44:41.510459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.268 [2024-11-18 18:44:41.510494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.268 qpair failed and we were unable to recover it. 00:37:43.268 [2024-11-18 18:44:41.510601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.268 [2024-11-18 18:44:41.510664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.268 qpair failed and we were unable to recover it. 
00:37:43.268 [2024-11-18 18:44:41.510783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.510822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.510968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.511006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.511191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.511226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.511358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.511393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.511502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.511537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.511698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.511733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.511831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.511875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.511995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.512035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.512186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.512223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.512345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.512381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.512495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.512530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.512694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.512739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.512919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.512998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.513175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.513221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.513375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.513426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.513540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.513595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.513774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.513820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.514024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.514061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.514248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.514294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.514455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.514509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.514658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.514727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.514897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.514952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.515077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.515122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.515256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.515301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.515417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.515456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.515580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.515646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.515788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.515829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.515977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.516016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.516143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.268 [2024-11-18 18:44:41.516178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.268 qpair failed and we were unable to recover it.
00:37:43.268 [2024-11-18 18:44:41.516285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.269 [2024-11-18 18:44:41.516320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.269 qpair failed and we were unable to recover it.
00:37:43.269 [2024-11-18 18:44:41.516430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.269 [2024-11-18 18:44:41.516466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.269 qpair failed and we were unable to recover it.
00:37:43.561 [2024-11-18 18:44:41.516636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.561 [2024-11-18 18:44:41.516695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.561 qpair failed and we were unable to recover it.
00:37:43.561 [2024-11-18 18:44:41.516815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.561 [2024-11-18 18:44:41.516856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.561 qpair failed and we were unable to recover it.
00:37:43.561 [2024-11-18 18:44:41.517004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.561 [2024-11-18 18:44:41.517047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.561 qpair failed and we were unable to recover it.
00:37:43.561 [2024-11-18 18:44:41.517187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.561 [2024-11-18 18:44:41.517225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.561 qpair failed and we were unable to recover it.
00:37:43.561 [2024-11-18 18:44:41.517340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.561 [2024-11-18 18:44:41.517375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.561 qpair failed and we were unable to recover it.
00:37:43.561 [2024-11-18 18:44:41.517487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.561 [2024-11-18 18:44:41.517522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.561 qpair failed and we were unable to recover it.
00:37:43.561 [2024-11-18 18:44:41.517629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.561 [2024-11-18 18:44:41.517664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.561 qpair failed and we were unable to recover it.
00:37:43.561 [2024-11-18 18:44:41.517789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.561 [2024-11-18 18:44:41.517838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.561 qpair failed and we were unable to recover it.
00:37:43.561 [2024-11-18 18:44:41.517968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.561 [2024-11-18 18:44:41.518004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.561 qpair failed and we were unable to recover it.
00:37:43.561 [2024-11-18 18:44:41.518113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.561 [2024-11-18 18:44:41.518150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.561 qpair failed and we were unable to recover it.
00:37:43.561 [2024-11-18 18:44:41.518261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.561 [2024-11-18 18:44:41.518297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.518428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.518462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.518568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.518602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.518748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.518785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.518910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.518949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.519089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.519127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.519315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.519375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.519499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.519534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.519701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.519761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.519926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.519980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.520132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.520176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.520343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.520404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.520531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.520566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.520693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.520730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.520891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.520943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.521069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.521122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.521287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.521321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.521466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.521501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.521605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.521648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.521786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.521821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.521942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.521991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.522109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.522153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.522259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.522295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.522435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.522470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.522618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.522655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.522777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.522831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.522974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.523027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.523184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.562 [2024-11-18 18:44:41.523237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.562 qpair failed and we were unable to recover it.
00:37:43.562 [2024-11-18 18:44:41.523405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.523441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.523572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.523633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.523789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.523847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.524003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.524041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.524213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.524268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.524379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.524414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.524522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.524557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.524752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.524807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.524971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.525011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.525162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.525224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.525362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.525397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.525498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.525533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.525677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.525713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.525875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.525913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.526085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.526123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.526268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.526307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.526474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.526510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.526665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.526714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.526878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.526948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.527086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.527128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.527277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.527338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.527486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.527525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.527718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.527768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.527971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.528025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.528154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.528219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.528375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.563 [2024-11-18 18:44:41.528431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.563 qpair failed and we were unable to recover it.
00:37:43.563 [2024-11-18 18:44:41.528532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.563 [2024-11-18 18:44:41.528565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.563 qpair failed and we were unable to recover it. 00:37:43.563 [2024-11-18 18:44:41.528748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.563 [2024-11-18 18:44:41.528804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.563 qpair failed and we were unable to recover it. 00:37:43.563 [2024-11-18 18:44:41.528961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.563 [2024-11-18 18:44:41.529003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.563 qpair failed and we were unable to recover it. 00:37:43.563 [2024-11-18 18:44:41.529185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.563 [2024-11-18 18:44:41.529240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.563 qpair failed and we were unable to recover it. 00:37:43.563 [2024-11-18 18:44:41.529379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.563 [2024-11-18 18:44:41.529433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.563 qpair failed and we were unable to recover it. 
00:37:43.563 [2024-11-18 18:44:41.529552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.563 [2024-11-18 18:44:41.529592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.563 qpair failed and we were unable to recover it. 00:37:43.563 [2024-11-18 18:44:41.529743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.563 [2024-11-18 18:44:41.529793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.563 qpair failed and we were unable to recover it. 00:37:43.563 [2024-11-18 18:44:41.529931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.563 [2024-11-18 18:44:41.529973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.530103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.530142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.530310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.530368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 
00:37:43.564 [2024-11-18 18:44:41.530509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.530544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.530658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.530693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.530828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.530862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.531055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.531089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.531240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.531278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 
00:37:43.564 [2024-11-18 18:44:41.531436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.531478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.531617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.531658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.531786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.531836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.531969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.532008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.532238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.532303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 
00:37:43.564 [2024-11-18 18:44:41.532420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.532458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.532594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.532665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.532812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.532847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.532999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.533036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.533151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.533190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 
00:37:43.564 [2024-11-18 18:44:41.533321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.533374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.533517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.533572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.533772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.533822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.533937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.533974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.534075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.534110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 
00:37:43.564 [2024-11-18 18:44:41.534312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.534366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.534510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.534557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.534714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.534750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.534860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.534912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.535050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.535085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 
00:37:43.564 [2024-11-18 18:44:41.535219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.535269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.535396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.535440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.564 [2024-11-18 18:44:41.535577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.564 [2024-11-18 18:44:41.535620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.564 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.535750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.535786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.535963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.536002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 
00:37:43.565 [2024-11-18 18:44:41.536116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.536155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.536287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.536327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.536503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.536542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.536736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.536786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.536908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.536945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 
00:37:43.565 [2024-11-18 18:44:41.537099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.537154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.537316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.537380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.537533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.537578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.537725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.537774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.537953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.537994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 
00:37:43.565 [2024-11-18 18:44:41.538133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.538196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.538351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.538403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.538549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.538588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.538728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.538764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.538950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.538999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 
00:37:43.565 [2024-11-18 18:44:41.539160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.539214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.539362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.539411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.539568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.539614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.539743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.539778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.539878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.539930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 
00:37:43.565 [2024-11-18 18:44:41.540038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.540076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.540254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.540292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.540431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.540479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.540645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.540681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.540790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.540845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 
00:37:43.565 [2024-11-18 18:44:41.541020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.541058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.541210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.541249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.541412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.541451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.541573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.541620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.565 [2024-11-18 18:44:41.541743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.541777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 
00:37:43.565 [2024-11-18 18:44:41.541943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.565 [2024-11-18 18:44:41.542002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.565 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.542162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.542224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.542350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.542384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.542487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.542521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.542655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.542737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 
00:37:43.566 [2024-11-18 18:44:41.542898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.542945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.543103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.543163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.543344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.543408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.543575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.543641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.543793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.543843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 
00:37:43.566 [2024-11-18 18:44:41.544039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.544100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.544269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.544326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.544467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.544504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.544701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.544751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.544886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.544936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 
00:37:43.566 [2024-11-18 18:44:41.545148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.545214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.545351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.545392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.545552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.545588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.545736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.545773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.545943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.545997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 
00:37:43.566 [2024-11-18 18:44:41.546141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.546197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.546345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.546383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.546527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.546565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.546737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.546772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.546957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.547012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 
00:37:43.566 [2024-11-18 18:44:41.547197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.547250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.547408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.547454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.547585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.547631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.547761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.547795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.547906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.547941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 
00:37:43.566 [2024-11-18 18:44:41.548081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.548135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.566 qpair failed and we were unable to recover it. 00:37:43.566 [2024-11-18 18:44:41.548259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.566 [2024-11-18 18:44:41.548298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.548432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.548487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.548680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.548715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.548909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.548965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 
00:37:43.567 [2024-11-18 18:44:41.549149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.549188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.549305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.549360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.549535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.549575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.549717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.549771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.549894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.549964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 
00:37:43.567 [2024-11-18 18:44:41.550116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.550156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.550337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.550377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.550520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.550556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.550704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.550754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.550901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.550939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 
00:37:43.567 [2024-11-18 18:44:41.551100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.551162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.551347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.551403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.551538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.551591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.551739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.551773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.551907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.551956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 
00:37:43.567 [2024-11-18 18:44:41.552100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.552157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.552367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.552425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.552580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.552627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.552810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.552859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.553022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.553078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 
00:37:43.567 [2024-11-18 18:44:41.553233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.553295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.553433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.553468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.567 qpair failed and we were unable to recover it. 00:37:43.567 [2024-11-18 18:44:41.553603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.567 [2024-11-18 18:44:41.553643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.553789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.553823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.553937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.553973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 
00:37:43.568 [2024-11-18 18:44:41.554078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.554132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.554318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.554375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.554506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.554541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.554679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.554714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.554840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.554892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 
00:37:43.568 [2024-11-18 18:44:41.555107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.555165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.555306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.555371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.555516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.555550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.555674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.555709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.555841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.555875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 
00:37:43.568 [2024-11-18 18:44:41.555985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.556019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.556152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.556189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.556307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.556344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.556512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.556567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.556712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.556751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 
00:37:43.568 [2024-11-18 18:44:41.556863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.556898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.557033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.557071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.557276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.557315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.557493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.557548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.557702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.557738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 
00:37:43.568 [2024-11-18 18:44:41.557855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.557890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.558049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.558087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.558257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.558315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.558466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.558504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.558692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.558737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 
00:37:43.568 [2024-11-18 18:44:41.558848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.558881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.559010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.559054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.559176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.559212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.568 [2024-11-18 18:44:41.559337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.568 [2024-11-18 18:44:41.559375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.568 qpair failed and we were unable to recover it. 00:37:43.569 [2024-11-18 18:44:41.559495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.559534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 
00:37:43.569 [2024-11-18 18:44:41.559696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.559732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 00:37:43.569 [2024-11-18 18:44:41.559903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.559959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 00:37:43.569 [2024-11-18 18:44:41.560102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.560161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 00:37:43.569 [2024-11-18 18:44:41.560318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.560358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 00:37:43.569 [2024-11-18 18:44:41.560480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.560519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 
00:37:43.569 [2024-11-18 18:44:41.560650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.560685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 00:37:43.569 [2024-11-18 18:44:41.560844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.560878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 00:37:43.569 [2024-11-18 18:44:41.561013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.561047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 00:37:43.569 [2024-11-18 18:44:41.561220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.561286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 00:37:43.569 [2024-11-18 18:44:41.561397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.561446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 
00:37:43.569 [2024-11-18 18:44:41.561583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.561647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 00:37:43.569 [2024-11-18 18:44:41.561846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.561894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 00:37:43.569 [2024-11-18 18:44:41.562030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.562087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 00:37:43.569 [2024-11-18 18:44:41.562253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.562305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 00:37:43.569 [2024-11-18 18:44:41.562416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.562452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 
00:37:43.569 [2024-11-18 18:44:41.562593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.562643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 00:37:43.569 [2024-11-18 18:44:41.562761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.562802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 00:37:43.569 [2024-11-18 18:44:41.562923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.562962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 00:37:43.569 [2024-11-18 18:44:41.563082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.563118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 00:37:43.569 [2024-11-18 18:44:41.563221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.563256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 
00:37:43.569 [2024-11-18 18:44:41.563396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.569 [2024-11-18 18:44:41.563430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.569 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111, ECONNREFUSED) / "qpair failed and we were unable to recover it" pairs repeated ~115 times between 18:44:41.563 and 18:44:41.586, cycling over tqpair=0x6150001f2f00, 0x6150001ffe80, 0x615000210000 and 0x61500021ff00, all against addr=10.0.0.2, port=4420 ...]
00:37:43.573 [2024-11-18 18:44:41.586589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.586632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 
00:37:43.573 [2024-11-18 18:44:41.586738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.586772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 00:37:43.573 [2024-11-18 18:44:41.586929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.586973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 00:37:43.573 [2024-11-18 18:44:41.587183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.587244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 00:37:43.573 [2024-11-18 18:44:41.587391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.587430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 00:37:43.573 [2024-11-18 18:44:41.587574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.587630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 
00:37:43.573 [2024-11-18 18:44:41.587792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.587841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 00:37:43.573 [2024-11-18 18:44:41.588015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.588058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 00:37:43.573 [2024-11-18 18:44:41.588215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.588255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 00:37:43.573 [2024-11-18 18:44:41.588387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.588440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 00:37:43.573 [2024-11-18 18:44:41.588548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.588586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 
00:37:43.573 [2024-11-18 18:44:41.588722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.588757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 00:37:43.573 [2024-11-18 18:44:41.588914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.588948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 00:37:43.573 [2024-11-18 18:44:41.589137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.589176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 00:37:43.573 [2024-11-18 18:44:41.589356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.589394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 00:37:43.573 [2024-11-18 18:44:41.589551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.589585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 
00:37:43.573 [2024-11-18 18:44:41.589753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.589788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 00:37:43.573 [2024-11-18 18:44:41.589954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.590018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 00:37:43.573 [2024-11-18 18:44:41.590168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.590227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 00:37:43.573 [2024-11-18 18:44:41.590422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.590460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 00:37:43.573 [2024-11-18 18:44:41.590620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.573 [2024-11-18 18:44:41.590676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.573 qpair failed and we were unable to recover it. 
00:37:43.574 [2024-11-18 18:44:41.590831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.590877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.591001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.591053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.591221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.591258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.591425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.591463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.591626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.591687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 
00:37:43.574 [2024-11-18 18:44:41.591828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.591873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.592018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.592056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.592156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.592189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.592340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.592380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.592520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.592555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 
00:37:43.574 [2024-11-18 18:44:41.592698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.592747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.592876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.592913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.593072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.593110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.593258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.593296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.593548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.593587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 
00:37:43.574 [2024-11-18 18:44:41.593730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.593764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.593877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.593930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.594194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.594251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.594551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.594619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.594787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.594821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 
00:37:43.574 [2024-11-18 18:44:41.594970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.595010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.595131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.595183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.595408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.595467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.595586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.595636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.595760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.595794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 
00:37:43.574 [2024-11-18 18:44:41.595910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.595944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.596085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.596121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.596272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.596311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.596486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.596525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.596675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.596712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 
00:37:43.574 [2024-11-18 18:44:41.596881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.596930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.597134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.597188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.597373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.597413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.597564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.597603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.574 qpair failed and we were unable to recover it. 00:37:43.574 [2024-11-18 18:44:41.597780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.574 [2024-11-18 18:44:41.597815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-11-18 18:44:41.597981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.598020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.598131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.598170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.598364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.598435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.598621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.598677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.598808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.598856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-11-18 18:44:41.599017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.599086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.599276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.599331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.599437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.599472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.599588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.599632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.599822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.599883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-11-18 18:44:41.600043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.600085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.600272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.600337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.600463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.600498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.600663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.600719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.600926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.600980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-11-18 18:44:41.601176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.601244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.601454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.601494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.601664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.601700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.601886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.601936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.602143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.602197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-11-18 18:44:41.602359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.602400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.602553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.602593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.602773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.602808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.602954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.602993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.603261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.603300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-11-18 18:44:41.603499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.603599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.603755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.603789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.603996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.604064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.604308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.604374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.604548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.604586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-11-18 18:44:41.604763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.604798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.604940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.604974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.605078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.575 [2024-11-18 18:44:41.605131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.575 qpair failed and we were unable to recover it. 00:37:43.575 [2024-11-18 18:44:41.605325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.605385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.605552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.605602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 
00:37:43.576 [2024-11-18 18:44:41.605770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.605819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.605959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.606028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.606233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.606271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.606527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.606613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.606781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.606815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 
00:37:43.576 [2024-11-18 18:44:41.606986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.607024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.607322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.607381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.607541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.607581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.607740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.607774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.607915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.607966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 
00:37:43.576 [2024-11-18 18:44:41.608141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.608200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.608354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.608390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.608539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.608574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.608717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.608760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.608921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.608971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 
00:37:43.576 [2024-11-18 18:44:41.609094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.609131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.609270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.609304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.609444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.609480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.609581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.609631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.609743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.609778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 
00:37:43.576 [2024-11-18 18:44:41.609951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.609990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.610110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.610149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.610270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.610305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.610465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.610500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.610632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.610679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 
00:37:43.576 [2024-11-18 18:44:41.610786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.610820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.610997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.611052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.611183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.611228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.611366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.611402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.611566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.611601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 
00:37:43.576 [2024-11-18 18:44:41.611741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.576 [2024-11-18 18:44:41.611779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.576 qpair failed and we were unable to recover it. 00:37:43.576 [2024-11-18 18:44:41.611957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.611997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.612163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.612197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.612356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.612393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.612519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.612556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 
00:37:43.577 [2024-11-18 18:44:41.612717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.612771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.612907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.612945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.613057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.613093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.613199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.613234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.613341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.613389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 
00:37:43.577 [2024-11-18 18:44:41.613533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.613567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.613745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.613799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.613943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.613978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.614086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.614121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.614260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.614295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 
00:37:43.577 [2024-11-18 18:44:41.614438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.614473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.614577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.614619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.614737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.614772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.614900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.614937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.615075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.615110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 
00:37:43.577 [2024-11-18 18:44:41.615223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.615269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.615392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.615427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.615567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.615603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.615736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.615772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.615897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.615933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 
00:37:43.577 [2024-11-18 18:44:41.616042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.616078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.616177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.577 [2024-11-18 18:44:41.616211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.577 qpair failed and we were unable to recover it. 00:37:43.577 [2024-11-18 18:44:41.616337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.616372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 00:37:43.578 [2024-11-18 18:44:41.616479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.616519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 00:37:43.578 [2024-11-18 18:44:41.616669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.616725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 
00:37:43.578 [2024-11-18 18:44:41.616893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.616932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 00:37:43.578 [2024-11-18 18:44:41.617108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.617163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 00:37:43.578 [2024-11-18 18:44:41.617304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.617347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 00:37:43.578 [2024-11-18 18:44:41.617474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.617510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 00:37:43.578 [2024-11-18 18:44:41.617620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.617665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 
00:37:43.578 [2024-11-18 18:44:41.617796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.617831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 00:37:43.578 [2024-11-18 18:44:41.618023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.618080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 00:37:43.578 [2024-11-18 18:44:41.618218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.618256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 00:37:43.578 [2024-11-18 18:44:41.618428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.618466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 00:37:43.578 [2024-11-18 18:44:41.618631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.618671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 
00:37:43.578 [2024-11-18 18:44:41.618776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.618811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 00:37:43.578 [2024-11-18 18:44:41.619026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.619086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 00:37:43.578 [2024-11-18 18:44:41.619203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.619242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 00:37:43.578 [2024-11-18 18:44:41.619355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.619393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 00:37:43.578 [2024-11-18 18:44:41.619556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.619599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 
00:37:43.578 [2024-11-18 18:44:41.619793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.619843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 00:37:43.578 [2024-11-18 18:44:41.620058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.620112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 00:37:43.578 [2024-11-18 18:44:41.620350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.620449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 00:37:43.578 [2024-11-18 18:44:41.620619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.620675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 00:37:43.578 [2024-11-18 18:44:41.620771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-18 18:44:41.620805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.578 qpair failed and we were unable to recover it. 
00:37:43.578 [2024-11-18 18:44:41.620916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.578 [2024-11-18 18:44:41.620955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.578 qpair failed and we were unable to recover it.
[... the same three-line record (posix_sock_create connect() failure, errno = 111; nvme_tcp_qpair_connect_sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 18:44:41.620916 through 18:44:41.644986, cycling over tqpairs 0x6150001ffe80, 0x615000210000, 0x61500021ff00, and 0x6150001f2f00, all targeting addr=10.0.0.2, port=4420 ...]
00:37:43.582 [2024-11-18 18:44:41.645090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.582 [2024-11-18 18:44:41.645144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.582 qpair failed and we were unable to recover it. 00:37:43.582 [2024-11-18 18:44:41.645370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.582 [2024-11-18 18:44:41.645409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.582 qpair failed and we were unable to recover it. 00:37:43.582 [2024-11-18 18:44:41.645564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.582 [2024-11-18 18:44:41.645599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.582 qpair failed and we were unable to recover it. 00:37:43.582 [2024-11-18 18:44:41.645745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.582 [2024-11-18 18:44:41.645779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.582 qpair failed and we were unable to recover it. 00:37:43.582 [2024-11-18 18:44:41.645956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.582 [2024-11-18 18:44:41.646012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.582 qpair failed and we were unable to recover it. 
00:37:43.582 [2024-11-18 18:44:41.646202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.582 [2024-11-18 18:44:41.646244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.582 qpair failed and we were unable to recover it. 00:37:43.582 [2024-11-18 18:44:41.646396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.582 [2024-11-18 18:44:41.646436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.582 qpair failed and we were unable to recover it. 00:37:43.582 [2024-11-18 18:44:41.646580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.582 [2024-11-18 18:44:41.646626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.582 qpair failed and we were unable to recover it. 00:37:43.582 [2024-11-18 18:44:41.646778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.582 [2024-11-18 18:44:41.646813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.582 qpair failed and we were unable to recover it. 00:37:43.582 [2024-11-18 18:44:41.646953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.582 [2024-11-18 18:44:41.646987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.582 qpair failed and we were unable to recover it. 
00:37:43.582 [2024-11-18 18:44:41.647117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.582 [2024-11-18 18:44:41.647151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.582 qpair failed and we were unable to recover it. 00:37:43.582 [2024-11-18 18:44:41.647300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.582 [2024-11-18 18:44:41.647338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.582 qpair failed and we were unable to recover it. 00:37:43.582 [2024-11-18 18:44:41.647481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.582 [2024-11-18 18:44:41.647518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.582 qpair failed and we were unable to recover it. 00:37:43.582 [2024-11-18 18:44:41.647656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.582 [2024-11-18 18:44:41.647722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.582 qpair failed and we were unable to recover it. 00:37:43.582 [2024-11-18 18:44:41.647906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.582 [2024-11-18 18:44:41.647955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.582 qpair failed and we were unable to recover it. 
00:37:43.582 [2024-11-18 18:44:41.648174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.582 [2024-11-18 18:44:41.648228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.582 qpair failed and we were unable to recover it. 00:37:43.582 [2024-11-18 18:44:41.648388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.582 [2024-11-18 18:44:41.648427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.582 qpair failed and we were unable to recover it. 00:37:43.582 [2024-11-18 18:44:41.648545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.582 [2024-11-18 18:44:41.648583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.582 qpair failed and we were unable to recover it. 00:37:43.582 [2024-11-18 18:44:41.648718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.582 [2024-11-18 18:44:41.648753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.582 qpair failed and we were unable to recover it. 00:37:43.582 [2024-11-18 18:44:41.648870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.582 [2024-11-18 18:44:41.648925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 
00:37:43.583 [2024-11-18 18:44:41.649115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.649149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.649351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.649389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.649514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.649557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.649748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.649804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.649966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.650017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 
00:37:43.583 [2024-11-18 18:44:41.650247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.650307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.650528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.650583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.650756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.650791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.650926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.650966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.651115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.651152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 
00:37:43.583 [2024-11-18 18:44:41.651368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.651408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.651582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.651625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.651781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.651831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.652001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.652042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.652278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.652338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 
00:37:43.583 [2024-11-18 18:44:41.652494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.652533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.652722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.652757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.652916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.652966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.653190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.653264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.653438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.653493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 
00:37:43.583 [2024-11-18 18:44:41.653675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.653711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.653866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.653901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.654079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.654117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.654312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.654412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.654568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.654602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 
00:37:43.583 [2024-11-18 18:44:41.654724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.654760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.654880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.654919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.655122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.655193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.655383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.583 [2024-11-18 18:44:41.655443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.583 qpair failed and we were unable to recover it. 00:37:43.583 [2024-11-18 18:44:41.655591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.655633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 
00:37:43.584 [2024-11-18 18:44:41.655797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.655836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.656009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.656048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.656165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.656204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.656381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.656419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.656533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.656571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 
00:37:43.584 [2024-11-18 18:44:41.656728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.656777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.656900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.656937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.657081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.657116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.657256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.657290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.657451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.657486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 
00:37:43.584 [2024-11-18 18:44:41.657588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.657630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.657790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.657824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.657948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.657986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.658131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.658172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.658340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.658377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 
00:37:43.584 [2024-11-18 18:44:41.658541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.658576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.658698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.658733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.658891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.658925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.659109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.659181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.659330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.659396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 
00:37:43.584 [2024-11-18 18:44:41.659559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.659622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.659725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.659770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.659888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.659923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.660091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.660125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.660340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.660378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 
00:37:43.584 [2024-11-18 18:44:41.660552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.660590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.660769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.660819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.661009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.661080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.661285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.661344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 00:37:43.584 [2024-11-18 18:44:41.661451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.584 [2024-11-18 18:44:41.661489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.584 qpair failed and we were unable to recover it. 
00:37:43.588 [2024-11-18 18:44:41.683977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.684012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 00:37:43.588 [2024-11-18 18:44:41.684137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.684172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 00:37:43.588 [2024-11-18 18:44:41.684325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.684375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 00:37:43.588 [2024-11-18 18:44:41.684554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.684619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 00:37:43.588 [2024-11-18 18:44:41.684766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.684816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 
00:37:43.588 [2024-11-18 18:44:41.685008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.685047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 00:37:43.588 [2024-11-18 18:44:41.685213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.685249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 00:37:43.588 [2024-11-18 18:44:41.685409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.685448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 00:37:43.588 [2024-11-18 18:44:41.685595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.685660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 00:37:43.588 [2024-11-18 18:44:41.685801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.685836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 
00:37:43.588 [2024-11-18 18:44:41.685995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.686029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 00:37:43.588 [2024-11-18 18:44:41.686188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.686226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 00:37:43.588 [2024-11-18 18:44:41.686428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.686466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 00:37:43.588 [2024-11-18 18:44:41.686575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.686622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 00:37:43.588 [2024-11-18 18:44:41.686809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.686859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 
00:37:43.588 [2024-11-18 18:44:41.687024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.687074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 00:37:43.588 [2024-11-18 18:44:41.687263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.687329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 00:37:43.588 [2024-11-18 18:44:41.687448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.687505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 00:37:43.588 [2024-11-18 18:44:41.687643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.687679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 00:37:43.588 [2024-11-18 18:44:41.687856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.687911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 
00:37:43.588 [2024-11-18 18:44:41.688132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.688208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.588 qpair failed and we were unable to recover it. 00:37:43.588 [2024-11-18 18:44:41.688376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.588 [2024-11-18 18:44:41.688438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.688597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.688639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.688772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.688806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.688966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.689035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 
00:37:43.589 [2024-11-18 18:44:41.689298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.689359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.689664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.689700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.689845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.689882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.690147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.690218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.690470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.690529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 
00:37:43.589 [2024-11-18 18:44:41.690670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.690704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.690858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.690911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.691063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.691101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.691275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.691320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.691430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.691481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 
00:37:43.589 [2024-11-18 18:44:41.691641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.691676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.691810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.691845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.691984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.692022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.692165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.692203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.692354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.692392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 
00:37:43.589 [2024-11-18 18:44:41.692575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.692624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.692778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.692823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.692983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.693018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.693169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.693208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.693347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.693385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 
00:37:43.589 [2024-11-18 18:44:41.693568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.693603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.693763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.693812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.693966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.694008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.694148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.694204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.694384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.694422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 
00:37:43.589 [2024-11-18 18:44:41.694549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.694587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.694753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.694787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.694891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.694926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.695081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.695119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.695288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.695327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 
00:37:43.589 [2024-11-18 18:44:41.695464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.695502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.695666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.695702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.695817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.589 [2024-11-18 18:44:41.695851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.589 qpair failed and we were unable to recover it. 00:37:43.589 [2024-11-18 18:44:41.695953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.590 [2024-11-18 18:44:41.695988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.590 qpair failed and we were unable to recover it. 00:37:43.590 [2024-11-18 18:44:41.696110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.590 [2024-11-18 18:44:41.696147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.590 qpair failed and we were unable to recover it. 
00:37:43.590 [2024-11-18 18:44:41.696277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.590 [2024-11-18 18:44:41.696330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.590 qpair failed and we were unable to recover it. 00:37:43.590 [2024-11-18 18:44:41.696480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.590 [2024-11-18 18:44:41.696518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.590 qpair failed and we were unable to recover it. 00:37:43.590 [2024-11-18 18:44:41.696676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.590 [2024-11-18 18:44:41.696727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.590 qpair failed and we were unable to recover it. 00:37:43.590 [2024-11-18 18:44:41.696864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.590 [2024-11-18 18:44:41.696905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.590 qpair failed and we were unable to recover it. 00:37:43.590 [2024-11-18 18:44:41.697046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.590 [2024-11-18 18:44:41.697084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.590 qpair failed and we were unable to recover it. 
00:37:43.590 [2024-11-18 18:44:41.697252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.590 [2024-11-18 18:44:41.697293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.590 qpair failed and we were unable to recover it. 00:37:43.590 [2024-11-18 18:44:41.697492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.590 [2024-11-18 18:44:41.697532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.590 qpair failed and we were unable to recover it. 00:37:43.590 [2024-11-18 18:44:41.697710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.590 [2024-11-18 18:44:41.697746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.590 qpair failed and we were unable to recover it. 00:37:43.590 [2024-11-18 18:44:41.697884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.590 [2024-11-18 18:44:41.697920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.590 qpair failed and we were unable to recover it. 00:37:43.590 [2024-11-18 18:44:41.698183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.590 [2024-11-18 18:44:41.698244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.590 qpair failed and we were unable to recover it. 
00:37:43.590 [2024-11-18 18:44:41.698394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.590 [2024-11-18 18:44:41.698433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.590 qpair failed and we were unable to recover it. 00:37:43.590 [2024-11-18 18:44:41.698603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.590 [2024-11-18 18:44:41.698662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.590 qpair failed and we were unable to recover it. 00:37:43.590 [2024-11-18 18:44:41.698766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.590 [2024-11-18 18:44:41.698800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.590 qpair failed and we were unable to recover it. 00:37:43.590 [2024-11-18 18:44:41.698909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.590 [2024-11-18 18:44:41.698944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.590 qpair failed and we were unable to recover it. 00:37:43.590 [2024-11-18 18:44:41.699108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.590 [2024-11-18 18:44:41.699142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.590 qpair failed and we were unable to recover it. 
00:37:43.590 [2024-11-18 18:44:41.699255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.590 [2024-11-18 18:44:41.699294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.590 qpair failed and we were unable to recover it.
[The three log records above repeat continuously from 18:44:41.699255 through 18:44:41.724678 for tqpair handles 0x6150001ffe80, 0x61500021ff00, and 0x6150001f2f00; duplicate entries omitted.]
00:37:43.593 [2024-11-18 18:44:41.724838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.593 [2024-11-18 18:44:41.724887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.593 qpair failed and we were unable to recover it. 00:37:43.593 [2024-11-18 18:44:41.724997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.593 [2024-11-18 18:44:41.725032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.593 qpair failed and we were unable to recover it. 00:37:43.593 [2024-11-18 18:44:41.725173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.593 [2024-11-18 18:44:41.725220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.593 qpair failed and we were unable to recover it. 00:37:43.593 [2024-11-18 18:44:41.725346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.593 [2024-11-18 18:44:41.725384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.593 qpair failed and we were unable to recover it. 00:37:43.593 [2024-11-18 18:44:41.725567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.593 [2024-11-18 18:44:41.725640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.593 qpair failed and we were unable to recover it. 
00:37:43.593 [2024-11-18 18:44:41.725812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.593 [2024-11-18 18:44:41.725850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.593 qpair failed and we were unable to recover it. 00:37:43.593 [2024-11-18 18:44:41.725971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.593 [2024-11-18 18:44:41.726007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.593 qpair failed and we were unable to recover it. 00:37:43.593 [2024-11-18 18:44:41.726124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.593 [2024-11-18 18:44:41.726157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.593 qpair failed and we were unable to recover it. 00:37:43.593 [2024-11-18 18:44:41.726317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.593 [2024-11-18 18:44:41.726352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.593 qpair failed and we were unable to recover it. 00:37:43.593 [2024-11-18 18:44:41.726511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.593 [2024-11-18 18:44:41.726551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.593 qpair failed and we were unable to recover it. 
00:37:43.593 [2024-11-18 18:44:41.726700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.593 [2024-11-18 18:44:41.726737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.593 qpair failed and we were unable to recover it. 00:37:43.593 [2024-11-18 18:44:41.726837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.593 [2024-11-18 18:44:41.726870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.593 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.726979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.727014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.727152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.727187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.727320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.727356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 
00:37:43.594 [2024-11-18 18:44:41.727514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.727554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.727745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.727795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.727920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.727958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.728141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.728225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.728507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.728565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 
00:37:43.594 [2024-11-18 18:44:41.728736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.728771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.728880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.728933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.729044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.729082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.729249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.729285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.729390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.729426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 
00:37:43.594 [2024-11-18 18:44:41.729628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.729696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.729833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.729870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.729985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.730021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.730169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.730206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.730368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.730415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 
00:37:43.594 [2024-11-18 18:44:41.730588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.730676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.730833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.730869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.731004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.731039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.731200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.731235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.731501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.731539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 
00:37:43.594 [2024-11-18 18:44:41.731750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.731785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.731914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.731963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.732161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.732218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.732347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.732384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.732486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.732521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 
00:37:43.594 [2024-11-18 18:44:41.732702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.732752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.732927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.732965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.733206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.733247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.733473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.733512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.733693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.733729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 
00:37:43.594 [2024-11-18 18:44:41.733866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.733901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.734125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.734187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.734305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.734352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.734490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.734526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.594 qpair failed and we were unable to recover it. 00:37:43.594 [2024-11-18 18:44:41.734728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.594 [2024-11-18 18:44:41.734765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 
00:37:43.595 [2024-11-18 18:44:41.734902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.734937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 00:37:43.595 [2024-11-18 18:44:41.735114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.735153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 00:37:43.595 [2024-11-18 18:44:41.735362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.735425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 00:37:43.595 [2024-11-18 18:44:41.735581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.735624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 00:37:43.595 [2024-11-18 18:44:41.735766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.735801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 
00:37:43.595 [2024-11-18 18:44:41.735956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.735995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 00:37:43.595 [2024-11-18 18:44:41.736124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.736173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 00:37:43.595 [2024-11-18 18:44:41.736325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.736364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 00:37:43.595 [2024-11-18 18:44:41.736542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.736581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 00:37:43.595 [2024-11-18 18:44:41.736742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.736777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 
00:37:43.595 [2024-11-18 18:44:41.736875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.736908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 00:37:43.595 [2024-11-18 18:44:41.737054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.737089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 00:37:43.595 [2024-11-18 18:44:41.737284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.737319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 00:37:43.595 [2024-11-18 18:44:41.737430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.737484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 00:37:43.595 [2024-11-18 18:44:41.737656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.737691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 
00:37:43.595 [2024-11-18 18:44:41.737827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.737862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 00:37:43.595 [2024-11-18 18:44:41.737970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.738004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 00:37:43.595 [2024-11-18 18:44:41.738126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.738164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 00:37:43.595 [2024-11-18 18:44:41.738322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.738357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 00:37:43.595 [2024-11-18 18:44:41.738536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.738575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 
00:37:43.595 [2024-11-18 18:44:41.738768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.738823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 00:37:43.595 [2024-11-18 18:44:41.738974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.739013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 00:37:43.595 [2024-11-18 18:44:41.739192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.739233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 00:37:43.595 [2024-11-18 18:44:41.739357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.739399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 00:37:43.595 [2024-11-18 18:44:41.739589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.739633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 
00:37:43.595 [2024-11-18 18:44:41.739777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.595 [2024-11-18 18:44:41.739813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.595 qpair failed and we were unable to recover it. 
[log condensed: the same three-line failure (posix.c:1054 connect() failed, errno = 111 → nvme_tcp.c:2288 sock connection error → "qpair failed and we were unable to recover it.") repeats continuously from 18:44:41.739777 through 18:44:41.761226, alternating between tqpair=0x61500021ff00 and tqpair=0x6150001ffe80, all targeting addr=10.0.0.2, port=4420]
00:37:43.598 [2024-11-18 18:44:41.761332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.598 [2024-11-18 18:44:41.761385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.598 qpair failed and we were unable to recover it. 00:37:43.598 [2024-11-18 18:44:41.761542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.598 [2024-11-18 18:44:41.761577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.598 qpair failed and we were unable to recover it. 00:37:43.598 [2024-11-18 18:44:41.761712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.598 [2024-11-18 18:44:41.761747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.598 qpair failed and we were unable to recover it. 00:37:43.598 [2024-11-18 18:44:41.761922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.598 [2024-11-18 18:44:41.761978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.598 qpair failed and we were unable to recover it. 00:37:43.598 [2024-11-18 18:44:41.762178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.598 [2024-11-18 18:44:41.762217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.598 qpair failed and we were unable to recover it. 
00:37:43.598 [2024-11-18 18:44:41.762341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.762381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.762524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.762563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.762728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.762763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.762895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.762948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.763090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.763128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 
00:37:43.599 [2024-11-18 18:44:41.763263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.763298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.763401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.763436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.763632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.763674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.763858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.763894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.764059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.764098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 
00:37:43.599 [2024-11-18 18:44:41.764235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.764274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.764461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.764498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.764684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.764724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.764875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.764915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.765074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.765108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 
00:37:43.599 [2024-11-18 18:44:41.765267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.765319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.765425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.765463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.765596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.765636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.765774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.765809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.766009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.766050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 
00:37:43.599 [2024-11-18 18:44:41.766239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.766274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.766383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.766439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.766591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.766637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.766810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.766844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.766960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.766995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 
00:37:43.599 [2024-11-18 18:44:41.767111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.767146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.767278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.767312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.767426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.767461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.767577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.767637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.767786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.767825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 
00:37:43.599 [2024-11-18 18:44:41.767979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.768018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.768220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.768291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.768449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.768485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.768633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.768669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.768791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.768830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 
00:37:43.599 [2024-11-18 18:44:41.769001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.769042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.769187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.769226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.599 [2024-11-18 18:44:41.769371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.599 [2024-11-18 18:44:41.769412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.599 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.769555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.769589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.769797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.769837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 
00:37:43.600 [2024-11-18 18:44:41.769982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.770034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.770213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.770248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.770347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.770380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.770577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.770628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.770797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.770832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 
00:37:43.600 [2024-11-18 18:44:41.770930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.770964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.771120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.771158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.771325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.771360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.771538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.771575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.771740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.771781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 
00:37:43.600 [2024-11-18 18:44:41.771942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.771978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.772114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.772150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.772283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.772318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.772464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.772518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.772674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.772710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 
00:37:43.600 [2024-11-18 18:44:41.772811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.772844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.773003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.773037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.773164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.773216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.773366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.773408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.773561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.773596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 
00:37:43.600 [2024-11-18 18:44:41.773752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.773788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.773963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.774001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.774137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.774194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.774345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.774384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.774540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.774578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 
00:37:43.600 [2024-11-18 18:44:41.774749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.774783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.774886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.774920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.775108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.775146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.775290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.775328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 00:37:43.600 [2024-11-18 18:44:41.775453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.600 [2024-11-18 18:44:41.775493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.600 qpair failed and we were unable to recover it. 
00:37:43.600 [2024-11-18 18:44:41.775656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.600 [2024-11-18 18:44:41.775693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.600 qpair failed and we were unable to recover it.
00:37:43.600 [2024-11-18 18:44:41.775824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.600 [2024-11-18 18:44:41.775874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.600 qpair failed and we were unable to recover it.
00:37:43.600 [2024-11-18 18:44:41.776030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.600 [2024-11-18 18:44:41.776086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.600 qpair failed and we were unable to recover it.
00:37:43.600 [2024-11-18 18:44:41.776192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.600 [2024-11-18 18:44:41.776226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.600 qpair failed and we were unable to recover it.
00:37:43.600 [2024-11-18 18:44:41.776355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.600 [2024-11-18 18:44:41.776390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.600 qpair failed and we were unable to recover it.
00:37:43.600 [2024-11-18 18:44:41.776549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.776590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.776757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.776810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.776946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.776981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.777099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.777135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.777296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.777330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.777436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.777471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.777631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.777668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.777808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.777843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.777989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.778024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.778153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.778187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.778330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.778365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.778496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.778532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.778671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.778707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.778890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.778944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.779128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.779182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.779328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.779379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.779491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.779527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.779632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.779666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.779775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.779829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.780007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.780078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.780195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.780234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.780351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.780389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.780520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.780556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.780717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.780764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.780902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.780956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.781133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.781186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.781310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.781362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.781483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.781533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.781677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.781727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.781842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.781896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.782095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.782194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.782426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.782465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.782622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.782676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.782835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.782885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.783044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.783099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.783286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.783355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.783503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.783542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.783691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.601 [2024-11-18 18:44:41.783727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.601 qpair failed and we were unable to recover it.
00:37:43.601 [2024-11-18 18:44:41.783834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.783868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.784019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.784058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.784287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.784331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.784474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.784513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.784720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.784770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.784958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.785008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.785177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.785236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.785385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.785438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.785626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.785685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.785832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.785883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.786033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.786072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.786211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.786249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.786484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.786553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.786754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.786789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.786942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.786979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.787193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.787254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.787439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.787478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.787666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.787701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.787813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.787849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.788060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.788098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.788244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.788281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.788438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.788489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.788619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.788653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.788815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.788866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.789013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.789054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.789196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.789251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.789404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.789442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.789605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.789675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.789799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.789849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.789981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.790021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.790171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.790209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.790331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.790368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.790546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.602 [2024-11-18 18:44:41.790584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.602 qpair failed and we were unable to recover it.
00:37:43.602 [2024-11-18 18:44:41.790753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.790788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.790939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.790976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.791174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.791211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.791364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.791401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.791531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.791567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.791736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.791771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.791904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.791938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.792075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.792108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.792269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.792307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.792430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.792472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.792668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.792703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.792834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.792869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.793003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.793037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.793150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.793184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.793345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.793383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.793503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.793540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.793701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.793735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.793852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.793886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.794061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.794099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.794220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.794258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.794419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.794458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.794565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.794603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.794758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.794792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.794905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.794940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.795068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.795101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.795263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.795300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.795498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.795535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.795709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.795744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.795851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.795886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.796011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.796048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.796188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.796227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.796366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.796405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.796542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.796581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.796761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.796795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.796944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.796982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.797126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.797163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.797282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.797320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.603 [2024-11-18 18:44:41.797506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.603 [2024-11-18 18:44:41.797544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.603 qpair failed and we were unable to recover it.
00:37:43.604 [2024-11-18 18:44:41.797729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.604 [2024-11-18 18:44:41.797764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.604 qpair failed and we were unable to recover it.
00:37:43.604 [2024-11-18 18:44:41.797895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.797929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.798054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.798088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.798241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.798280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.798391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.798429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.798593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.798675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 
00:37:43.604 [2024-11-18 18:44:41.798844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.798893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.799089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.799145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.799307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.799365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.799528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.799563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.799706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.799741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 
00:37:43.604 [2024-11-18 18:44:41.799891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.799930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.800086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.800124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.800271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.800310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.800491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.800529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.800716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.800767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 
00:37:43.604 [2024-11-18 18:44:41.800894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.800934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.801150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.801191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.801449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.801508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.801628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.801680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.801829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.801879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 
00:37:43.604 [2024-11-18 18:44:41.802053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.802091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.802209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.802247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.802413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.802465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.802585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.802626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.802769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.802804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 
00:37:43.604 [2024-11-18 18:44:41.802953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.802995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.803209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.803248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.803425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.803464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.803622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.803677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.803806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.803856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 
00:37:43.604 [2024-11-18 18:44:41.804042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.804097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.804328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.804370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.804521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.804560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.804707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.804743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.804902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.804941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 
00:37:43.604 [2024-11-18 18:44:41.805049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.805102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.604 qpair failed and we were unable to recover it. 00:37:43.604 [2024-11-18 18:44:41.805324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.604 [2024-11-18 18:44:41.805385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.805556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.805600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.805768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.805819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.805983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.806042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 
00:37:43.605 [2024-11-18 18:44:41.806229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.806291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.806512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.806568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.806743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.806779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.806881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.806916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.807060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.807113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 
00:37:43.605 [2024-11-18 18:44:41.807246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.807285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.807441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.807476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.807619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.807655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.807819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.807853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.807990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.808026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 
00:37:43.605 [2024-11-18 18:44:41.808163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.808198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.808341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.808376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.808510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.808545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.808670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.808704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.808837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.808870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 
00:37:43.605 [2024-11-18 18:44:41.809005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.809040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.809176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.809209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.809393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.809444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.809599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.809670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.809806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.809845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 
00:37:43.605 [2024-11-18 18:44:41.810013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.810048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.810159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.810193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.810295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.810330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.810444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.810481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.810624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.810695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 
00:37:43.605 [2024-11-18 18:44:41.810860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.810903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.811030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.811069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.811274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.811332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.811487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.811522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.811652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.811702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 
00:37:43.605 [2024-11-18 18:44:41.811841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.811878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.812028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.812081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.812263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.812316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.812429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.605 [2024-11-18 18:44:41.812466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.605 qpair failed and we were unable to recover it. 00:37:43.605 [2024-11-18 18:44:41.812598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.606 [2024-11-18 18:44:41.812640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.606 qpair failed and we were unable to recover it. 
00:37:43.606 [2024-11-18 18:44:41.812803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.812856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.813031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.813068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.813226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.813297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.813461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.813497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.813618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.813658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.813825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.813874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.814028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.814066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.814205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.814241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.814403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.814438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.814572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.814617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.814762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.814799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.814930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.814980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.815130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.815166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.815334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.815369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.815507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.815542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.815728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.815779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.815929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.815970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.816192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.816252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.816441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.816501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.816617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.816671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.816805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.816840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.816948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.817000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.817140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.817180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.817379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.817477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.817667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.817703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.817805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.817841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.818006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.818041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.818190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.818228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.818390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.818429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.818591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.818635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.818793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.818842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.819029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.819072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.819326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.819384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.819559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.819597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.819745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.819780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.819907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.819957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.820167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.820224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.820384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.820445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.820569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.606 [2024-11-18 18:44:41.820629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.606 qpair failed and we were unable to recover it.
00:37:43.606 [2024-11-18 18:44:41.820746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.820781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.820935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.820989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.821138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.821198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.821334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.821397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.821531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.821566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.821694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.821731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.821835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.821869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.822031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.822065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.822177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.822211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.822396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.822435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.822588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.822638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.822820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.822858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.822991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.823051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.823204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.823242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.823385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.823422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.823541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.823579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.823712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.823746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.823940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.823995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.824162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.824217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.824370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.824423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.824536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.824571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.824687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.824723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.824836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.824873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.825135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.825199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.825403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.825462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.825603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.825652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.825775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.825813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.825953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.825990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.826136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.826210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.826335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.826373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.826533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.826570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.826719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.826755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.826868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.826904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.827022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.827058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.827170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.827205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.827333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.827368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.607 [2024-11-18 18:44:41.827497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.607 [2024-11-18 18:44:41.827533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.607 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.827676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.827712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.827818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.827854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.827956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.827990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.828125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.828158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.828270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.828303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.828430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.828464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.828576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.828624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.828740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.828775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.828912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.828947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.829077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.829115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.829277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.829315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.829433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.829470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.829586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.829632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.829772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.829806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.829941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.829993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.830145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.830183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.830302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.830339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.830456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.830494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.830646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.830699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.830814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.830848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.831011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.831048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.831202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.831240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.831383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.831418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.831575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.831619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.831756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.831790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.831921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.831955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.832114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.832151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.832267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.832304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.832454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.832493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.832659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.832694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.832805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.832839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.832983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.833022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.833195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.833234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.833376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.833428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.833544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.833582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.833737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.833773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.833907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.833959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.834125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.834192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.834368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.608 [2024-11-18 18:44:41.834425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:43.608 qpair failed and we were unable to recover it.
00:37:43.608 [2024-11-18 18:44:41.834595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.608 [2024-11-18 18:44:41.834639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.608 qpair failed and we were unable to recover it. 00:37:43.608 [2024-11-18 18:44:41.834748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.834784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.834889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.834924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.835071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.835125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.835237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.835274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 
00:37:43.609 [2024-11-18 18:44:41.835423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.835459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.835594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.835660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.835778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.835831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.835992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.836033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.836168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.836223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 
00:37:43.609 [2024-11-18 18:44:41.836404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.836444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.836590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.836638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.836781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.836817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.836997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.837066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.837258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.837314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 
00:37:43.609 [2024-11-18 18:44:41.837474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.837515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.837638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.837703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.837820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.837854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.838102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.838165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.838325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.838392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 
00:37:43.609 [2024-11-18 18:44:41.838538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.838576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.838745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.838782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.838929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.838965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.839074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.839128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.839277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.839337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 
00:37:43.609 [2024-11-18 18:44:41.839476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.839515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.839683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.839719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.839837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.839883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.840002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.840037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.840201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.840241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 
00:37:43.609 [2024-11-18 18:44:41.840421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.840460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.840588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.840638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.840797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.840833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.840991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.841031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.841204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.841244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 
00:37:43.609 [2024-11-18 18:44:41.841387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.841441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.841619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.841658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.841779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.841815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.841947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.841984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.842096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.842148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 
00:37:43.609 [2024-11-18 18:44:41.842289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.842328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.842440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.842479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.842641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.842677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.842807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.842842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 00:37:43.609 [2024-11-18 18:44:41.843007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.609 [2024-11-18 18:44:41.843046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.609 qpair failed and we were unable to recover it. 
00:37:43.610 [2024-11-18 18:44:41.843157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.843196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.843368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.843407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.843558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.843602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.843736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.843771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.843945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.843999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 
00:37:43.610 [2024-11-18 18:44:41.844134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.844171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.844306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.844359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.844478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.844516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.844683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.844719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.844859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.844894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 
00:37:43.610 [2024-11-18 18:44:41.845024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.845058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.845184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.845222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.845378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.845417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.845580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.845622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.845754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.845789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 
00:37:43.610 [2024-11-18 18:44:41.845927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.845967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.846120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.846158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.846297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.846336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.846452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.846491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.846614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.846668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 
00:37:43.610 [2024-11-18 18:44:41.846834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.846870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.847042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.847077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.847222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.847274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.847430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.847469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.847622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.847675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 
00:37:43.610 [2024-11-18 18:44:41.847791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.847827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.847936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.847971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.848120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.848159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.848292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.848331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 00:37:43.610 [2024-11-18 18:44:41.848493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.610 [2024-11-18 18:44:41.848532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.610 qpair failed and we were unable to recover it. 
00:37:43.610 [2024-11-18 18:44:41.848698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.610 [2024-11-18 18:44:41.848735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.610 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error / qpair failed retry sequence for tqpair=0x6150001ffe80 (addr=10.0.0.2, port=4420) repeats continuously from 18:44:41.848 through 18:44:41.868 ...]
00:37:43.897 [2024-11-18 18:44:41.868126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.897 [2024-11-18 18:44:41.868161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.897 qpair failed and we were unable to recover it. 00:37:43.897 [2024-11-18 18:44:41.868324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.897 [2024-11-18 18:44:41.868360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.897 qpair failed and we were unable to recover it. 00:37:43.897 [2024-11-18 18:44:41.868472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.897 [2024-11-18 18:44:41.868508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.897 qpair failed and we were unable to recover it. 00:37:43.897 [2024-11-18 18:44:41.868574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:37:43.897 [2024-11-18 18:44:41.868781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.897 [2024-11-18 18:44:41.868832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.897 qpair failed and we were unable to recover it. 00:37:43.897 [2024-11-18 18:44:41.868993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.897 [2024-11-18 18:44:41.869044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.897 qpair failed and we were unable to recover it. 
00:37:43.897 [2024-11-18 18:44:41.869178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.897 [2024-11-18 18:44:41.869239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.897 qpair failed and we were unable to recover it. 00:37:43.897 [2024-11-18 18:44:41.869405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.897 [2024-11-18 18:44:41.869461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.897 qpair failed and we were unable to recover it. 00:37:43.897 [2024-11-18 18:44:41.869630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.897 [2024-11-18 18:44:41.869667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.897 qpair failed and we were unable to recover it. 00:37:43.897 [2024-11-18 18:44:41.869776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.897 [2024-11-18 18:44:41.869812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.897 qpair failed and we were unable to recover it. 00:37:43.897 [2024-11-18 18:44:41.869984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.897 [2024-11-18 18:44:41.870020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.897 qpair failed and we were unable to recover it. 
00:37:43.897 [2024-11-18 18:44:41.870134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.897 [2024-11-18 18:44:41.870168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.897 qpair failed and we were unable to recover it. 00:37:43.897 [2024-11-18 18:44:41.870314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.897 [2024-11-18 18:44:41.870349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.897 qpair failed and we were unable to recover it. 00:37:43.897 [2024-11-18 18:44:41.870462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.897 [2024-11-18 18:44:41.870496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.897 qpair failed and we were unable to recover it. 00:37:43.897 [2024-11-18 18:44:41.870636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.897 [2024-11-18 18:44:41.870672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.897 qpair failed and we were unable to recover it. 00:37:43.897 [2024-11-18 18:44:41.870774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.897 [2024-11-18 18:44:41.870811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.897 qpair failed and we were unable to recover it. 
00:37:43.897 [2024-11-18 18:44:41.870959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.897 [2024-11-18 18:44:41.870993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.897 qpair failed and we were unable to recover it. 00:37:43.897 [2024-11-18 18:44:41.871138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.871177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.871288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.871326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.871493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.871531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.871683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.871718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 
00:37:43.898 [2024-11-18 18:44:41.871835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.871870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.872039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.872074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.872209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.872244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.872358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.872396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.872549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.872585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 
00:37:43.898 [2024-11-18 18:44:41.872793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.872843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.873027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.873081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.873313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.873391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.873536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.873575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.873811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.873862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 
00:37:43.898 [2024-11-18 18:44:41.874021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.874071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.874217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.874257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.874406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.874486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.874612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.874666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.874807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.874842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 
00:37:43.898 [2024-11-18 18:44:41.874950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.875009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.875135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.875173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.875347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.875386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.875530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.875569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.875761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.875797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 
00:37:43.898 [2024-11-18 18:44:41.875910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.875957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.876068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.876105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.876268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.876306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.876477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.876516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.876703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.876738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 
00:37:43.898 [2024-11-18 18:44:41.876875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.876909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.877018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.877053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.877234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.877273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.877422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.877460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.877598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.877642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 
00:37:43.898 [2024-11-18 18:44:41.877754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.877790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.877931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.877984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.878093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.878131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.898 [2024-11-18 18:44:41.878241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.898 [2024-11-18 18:44:41.878279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.898 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.878410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.878450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 
00:37:43.899 [2024-11-18 18:44:41.878613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.878649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.878783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.878818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.878978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.879017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.879207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.879246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.879424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.879463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 
00:37:43.899 [2024-11-18 18:44:41.879641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.879694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.879808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.879842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.879989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.880024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.880215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.880254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.880396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.880451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 
00:37:43.899 [2024-11-18 18:44:41.880583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.880630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.880784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.880820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.880982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.881017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.881162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.881200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.881352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.881390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 
00:37:43.899 [2024-11-18 18:44:41.881543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.881578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.881754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.881803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.881957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.881996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.882137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.882176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.882293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.882331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 
00:37:43.899 [2024-11-18 18:44:41.882484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.882530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.882685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.882721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.882854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.882908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.883060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.883099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.883237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.883277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 
00:37:43.899 [2024-11-18 18:44:41.883425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.883464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.883617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.883654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.883764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.883800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.883939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.883994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 00:37:43.899 [2024-11-18 18:44:41.884143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.899 [2024-11-18 18:44:41.884182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.899 qpair failed and we were unable to recover it. 
00:37:43.902 [2024-11-18 18:44:41.905172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.902 [2024-11-18 18:44:41.905217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.902 qpair failed and we were unable to recover it. 00:37:43.902 [2024-11-18 18:44:41.905407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.902 [2024-11-18 18:44:41.905445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.902 qpair failed and we were unable to recover it. 00:37:43.902 [2024-11-18 18:44:41.905614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.902 [2024-11-18 18:44:41.905668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.902 qpair failed and we were unable to recover it. 00:37:43.902 [2024-11-18 18:44:41.905803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.902 [2024-11-18 18:44:41.905838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.902 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.906018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.906053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 
00:37:43.903 [2024-11-18 18:44:41.906187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.906240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.906358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.906396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.906530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.906565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.906717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.906756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.906901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.906956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 
00:37:43.903 [2024-11-18 18:44:41.907151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.907188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.907381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.907441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.907625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.907678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.907792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.907834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.908031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.908094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 
00:37:43.903 [2024-11-18 18:44:41.908308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.908368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.908542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.908580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.908767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.908816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.909014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.909055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.909192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.909261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 
00:37:43.903 [2024-11-18 18:44:41.909421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.909459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.909619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.909674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.909789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.909825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.909965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.909999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.910139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.910174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 
00:37:43.903 [2024-11-18 18:44:41.910316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.910369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.910541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.910596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.910795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.910845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.911052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.911095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.911246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.911288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 
00:37:43.903 [2024-11-18 18:44:41.911448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.911488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.911619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.911668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.911781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.911818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.911970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.912010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.912208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.912278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 
00:37:43.903 [2024-11-18 18:44:41.912433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.912471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.912624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.912677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.912895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.912932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.913122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.913180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.913376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.913437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 
00:37:43.903 [2024-11-18 18:44:41.913575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.913622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.903 qpair failed and we were unable to recover it. 00:37:43.903 [2024-11-18 18:44:41.913767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.903 [2024-11-18 18:44:41.913802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.913981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.914036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.914201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.914269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.914419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.914478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 
00:37:43.904 [2024-11-18 18:44:41.914657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.914711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.914839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.914875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.915011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.915065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.915210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.915250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.915399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.915437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 
00:37:43.904 [2024-11-18 18:44:41.915564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.915603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.915774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.915809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.915934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.915973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.916109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.916155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.916319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.916375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 
00:37:43.904 [2024-11-18 18:44:41.916506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.916560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.916716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.916752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.916865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.916900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.917064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.917099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.917229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.917268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 
00:37:43.904 [2024-11-18 18:44:41.917392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.917432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.917592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.917635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.917750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.917785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.917917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.917952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.918054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.918089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 
00:37:43.904 [2024-11-18 18:44:41.918202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.918237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.918369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.918405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.918518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.918553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.918689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.918725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.918862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.918915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 
00:37:43.904 [2024-11-18 18:44:41.919049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.919084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.919194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.919228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.919351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.919390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.919518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.919553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.919668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.919703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 
00:37:43.904 [2024-11-18 18:44:41.919833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.919868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.920045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.920080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.920190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.920226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.920403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.904 [2024-11-18 18:44:41.920442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.904 qpair failed and we were unable to recover it. 00:37:43.904 [2024-11-18 18:44:41.920577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.905 [2024-11-18 18:44:41.920618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.905 qpair failed and we were unable to recover it. 
00:37:43.908 [2024-11-18 18:44:41.942677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.942725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.942884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.942934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.943094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.943136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.943324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.943364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.943533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.943569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 
00:37:43.908 [2024-11-18 18:44:41.943715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.943751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.943915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.943949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.944175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.944236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.944463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.944501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.944671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.944710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 
00:37:43.908 [2024-11-18 18:44:41.944849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.944886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.945069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.945120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.945316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.945355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.945593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.945658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.945819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.945854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 
00:37:43.908 [2024-11-18 18:44:41.946125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.946163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.946282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.946336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.946509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.946547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.946739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.946790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.946937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.946975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 
00:37:43.908 [2024-11-18 18:44:41.947143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.947183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.947308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.947346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.947525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.947563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.947698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.947734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.947886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.947922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 
00:37:43.908 [2024-11-18 18:44:41.948092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.948151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.948417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.948476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.948634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.948685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.948796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.948830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.948982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.949031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 
00:37:43.908 [2024-11-18 18:44:41.949143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.949180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.949322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.949357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.949468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.949520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.908 qpair failed and we were unable to recover it. 00:37:43.908 [2024-11-18 18:44:41.949707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.908 [2024-11-18 18:44:41.949747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.949901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.949935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 
00:37:43.909 [2024-11-18 18:44:41.950085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.950124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.950294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.950332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.950465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.950508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.950666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.950716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.950884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.950926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 
00:37:43.909 [2024-11-18 18:44:41.951070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.951107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.951272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.951306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.951411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.951445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.951542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.951576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.951714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.951749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 
00:37:43.909 [2024-11-18 18:44:41.951928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.951966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.952178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.952212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.952357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.952396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.952569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.952636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.952771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.952810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 
00:37:43.909 [2024-11-18 18:44:41.952945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.952981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.953241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.953298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.953425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.953459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.953622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.953657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.953804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.953839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 
00:37:43.909 [2024-11-18 18:44:41.953955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.953989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.954103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.954137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.954298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.954336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.954488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.954521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.954633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.954668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 
00:37:43.909 [2024-11-18 18:44:41.954825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.954875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.955085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.955123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.955314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.955354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.955503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.955555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.955708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.955744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 
00:37:43.909 [2024-11-18 18:44:41.955879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.955913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.956013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.956047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.956183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.956219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.956389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.956427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.956538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.956576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 
00:37:43.909 [2024-11-18 18:44:41.956719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.909 [2024-11-18 18:44:41.956755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.909 qpair failed and we were unable to recover it. 00:37:43.909 [2024-11-18 18:44:41.956891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.910 [2024-11-18 18:44:41.956926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.910 qpair failed and we were unable to recover it. 00:37:43.910 [2024-11-18 18:44:41.957114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.910 [2024-11-18 18:44:41.957164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.910 qpair failed and we were unable to recover it. 00:37:43.910 [2024-11-18 18:44:41.957294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.910 [2024-11-18 18:44:41.957329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.910 qpair failed and we were unable to recover it. 00:37:43.910 [2024-11-18 18:44:41.957465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.910 [2024-11-18 18:44:41.957518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.910 qpair failed and we were unable to recover it. 
00:37:43.910 [2024-11-18 18:44:41.957693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.957732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.957881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.957916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.958081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.958120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.958229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.958263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.958398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.958433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.958597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.958643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.958826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.958861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.958956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.958991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.959101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.959137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.959319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.959359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.959519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.959553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.959695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.959732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.959893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.959946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.960086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.960121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.960261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.960296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.960450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.960489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.960619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.960655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.960788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.960823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.961019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.961053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.961194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.961235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.961375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.961411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.961506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.961540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.961639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.961674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.961834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.961884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.962091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.962146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.962309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.962347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.962488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.962543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.962738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.962774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.962935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.962969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.963131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.910 [2024-11-18 18:44:41.963169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.910 qpair failed and we were unable to recover it.
00:37:43.910 [2024-11-18 18:44:41.963345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.963383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.963550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.963586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.963728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.963777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.963917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.963967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.964156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.964195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.964387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.964426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.964578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.964624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.964809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.964844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.964954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.964989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.965236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.965296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.965459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.965495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.965648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.965688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.965848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.965908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.966089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.966125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.966333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.966401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.966577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.966624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.966753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.966788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.966925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.966976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.967197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.967255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.967395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.967430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.967615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.967665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.967811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.967849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.968007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.968042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.968163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.968198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.968386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.968421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.968527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.968563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.968711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.968747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.968850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.968885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.969025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.969061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.969197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.969231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.969368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.969402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.969508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.969543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.969708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.969758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.969886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.969955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.970124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.970162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.970302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.970360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.970539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.970593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.911 [2024-11-18 18:44:41.970711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.911 [2024-11-18 18:44:41.970747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.911 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.970886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.970922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.971058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.971094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.971234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.971269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.971422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.971460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.971616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.971671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.971833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.971867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.971993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.972028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.972161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.972196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.972366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.972401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.972584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.972636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.972791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.972826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.972962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.972997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.973102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.973153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.973339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.973374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.973484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.973523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.973651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.973686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.973835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.973905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.974057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.974096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.974237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.974272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.974440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.974479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.974617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.974653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.974782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.974816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.974995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.975034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.975196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.975231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.975358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.975393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.975561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.975601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.975768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.975805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.975999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.976055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.976250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.976289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.976478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.976514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.976671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.976709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.976898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.976933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.977066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.977101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.977224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.977258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.977396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.977430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.977536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.977571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.977704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.977738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.912 qpair failed and we were unable to recover it.
00:37:43.912 [2024-11-18 18:44:41.977873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.912 [2024-11-18 18:44:41.977958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.913 qpair failed and we were unable to recover it.
00:37:43.913 [2024-11-18 18:44:41.978146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.913 [2024-11-18 18:44:41.978184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.913 qpair failed and we were unable to recover it.
00:37:43.913 [2024-11-18 18:44:41.978301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.913 [2024-11-18 18:44:41.978338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.913 qpair failed and we were unable to recover it.
00:37:43.913 [2024-11-18 18:44:41.978533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.913 [2024-11-18 18:44:41.978572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.913 qpair failed and we were unable to recover it.
00:37:43.913 [2024-11-18 18:44:41.978778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.913 [2024-11-18 18:44:41.978813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.913 qpair failed and we were unable to recover it.
00:37:43.913 [2024-11-18 18:44:41.978998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.913 [2024-11-18 18:44:41.979036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.913 qpair failed and we were unable to recover it.
00:37:43.913 [2024-11-18 18:44:41.979209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.913 [2024-11-18 18:44:41.979272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.913 qpair failed and we were unable to recover it.
00:37:43.913 [2024-11-18 18:44:41.979413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.913 [2024-11-18 18:44:41.979448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.913 qpair failed and we were unable to recover it.
00:37:43.913 [2024-11-18 18:44:41.979601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.913 [2024-11-18 18:44:41.979647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.913 qpair failed and we were unable to recover it.
00:37:43.913 [2024-11-18 18:44:41.979778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.913 [2024-11-18 18:44:41.979813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.913 qpair failed and we were unable to recover it.
00:37:43.913 [2024-11-18 18:44:41.979950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.979985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.980116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.980169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.980279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.980317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.980472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.980506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.980627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.980694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 
00:37:43.913 [2024-11-18 18:44:41.980856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.980898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.981032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.981066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.981225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.981284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.981434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.981472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.981604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.981651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 
00:37:43.913 [2024-11-18 18:44:41.981795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.981831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.982016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.982054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.982207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.982242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.982381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.982435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.982603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.982682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 
00:37:43.913 [2024-11-18 18:44:41.982826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.982864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.983022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.983096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.983342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.983403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.983565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.983601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.983745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.983779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 
00:37:43.913 [2024-11-18 18:44:41.983923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.983962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.984131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.984167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.984325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.984363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.984476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.984514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.984679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.984715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 
00:37:43.913 [2024-11-18 18:44:41.984844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.984883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.985069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.985122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.985303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.985339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.913 qpair failed and we were unable to recover it. 00:37:43.913 [2024-11-18 18:44:41.985470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.913 [2024-11-18 18:44:41.985523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.985694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.985743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 
00:37:43.914 [2024-11-18 18:44:41.985892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.985930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.986075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.986111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.986317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.986381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.986541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.986577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.986711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.986748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 
00:37:43.914 [2024-11-18 18:44:41.986917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.986954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.987100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.987135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.987232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.987267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.987428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.987468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.987618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.987653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 
00:37:43.914 [2024-11-18 18:44:41.987768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.987803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.987914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.987951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.988090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.988125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.988263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.988298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.988429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.988465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 
00:37:43.914 [2024-11-18 18:44:41.988566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.988600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.988739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.988774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.988877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.988920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.989055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.989089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.989222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.989258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 
00:37:43.914 [2024-11-18 18:44:41.989402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.989438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.989547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.989582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.989773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.989823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.989981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.990021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.990176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.990212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 
00:37:43.914 [2024-11-18 18:44:41.990327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.990362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.990472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.990507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.990617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.990653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.990764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.990798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.990969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.991011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 
00:37:43.914 [2024-11-18 18:44:41.991148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.991183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.991296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.991331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.991521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.991559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.991719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.991755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.991896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.991931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 
00:37:43.914 [2024-11-18 18:44:41.992047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.914 [2024-11-18 18:44:41.992085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.914 qpair failed and we were unable to recover it. 00:37:43.914 [2024-11-18 18:44:41.992244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.992279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.992414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.992448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.992602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.992665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.992773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.992807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 
00:37:43.915 [2024-11-18 18:44:41.992914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.992949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.993099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.993137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.993294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.993329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.993463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.993497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.993645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.993699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 
00:37:43.915 [2024-11-18 18:44:41.993838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.993874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.994037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.994072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.994201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.994236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.994398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.994434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.994572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.994614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 
00:37:43.915 [2024-11-18 18:44:41.994730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.994765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.994926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.994962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.995107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.995145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.995330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.995371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.995503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.995538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 
00:37:43.915 [2024-11-18 18:44:41.995699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.995753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.995928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.995966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.996115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.996154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.996292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.996343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.996493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.996533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 
00:37:43.915 [2024-11-18 18:44:41.996719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.996755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.996891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.996925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.997058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.997093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.997199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.997234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.997337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.997371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 
00:37:43.915 [2024-11-18 18:44:41.997561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.997623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.997798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.997835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.997967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.998006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.998165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.915 [2024-11-18 18:44:41.998229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.915 qpair failed and we were unable to recover it. 00:37:43.915 [2024-11-18 18:44:41.998412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:41.998447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 
00:37:43.916 [2024-11-18 18:44:41.998582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:41.998627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:41.998807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:41.998856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:41.999039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:41.999075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:41.999210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:41.999264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:41.999405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:41.999445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 
00:37:43.916 [2024-11-18 18:44:41.999629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:41.999665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:41.999802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:41.999838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:41.999985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.000024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.000203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.000238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.000374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.000409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 
00:37:43.916 [2024-11-18 18:44:42.000512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.000546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.000693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.000728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.000882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.000919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.001094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.001134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.001292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.001328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 
00:37:43.916 [2024-11-18 18:44:42.001482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.001520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.001671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.001715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.001902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.001938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.002111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.002150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.002341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.002377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 
00:37:43.916 [2024-11-18 18:44:42.002488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.002523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.002656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.002709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.002874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.002910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.003045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.003081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.003221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.003256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 
00:37:43.916 [2024-11-18 18:44:42.003404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.003444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.003595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.003638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.003777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.003835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.003976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.004014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.004175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.004210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 
00:37:43.916 [2024-11-18 18:44:42.004319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.004355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.004526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.004565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.004739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.004775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.004917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.004951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.005088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.005123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 
00:37:43.916 [2024-11-18 18:44:42.005321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.005356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.005542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.916 [2024-11-18 18:44:42.005593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.916 qpair failed and we were unable to recover it. 00:37:43.916 [2024-11-18 18:44:42.005802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.005872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.006044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.006094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.006246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.006286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 
00:37:43.917 [2024-11-18 18:44:42.006466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.006506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.006701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.006738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.006843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.006898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.007083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.007123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.007264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.007299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 
00:37:43.917 [2024-11-18 18:44:42.007453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.007501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.007672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.007728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.007857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.007895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.008035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.008071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.008207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.008242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 
00:37:43.917 [2024-11-18 18:44:42.008382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.008417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.008521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.008558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.008733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.008769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.008965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.009000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.009140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.009180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 
00:37:43.917 [2024-11-18 18:44:42.009386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.009442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.009620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.009659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.009793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.009829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.009958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.009993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.010133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.010168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 
00:37:43.917 [2024-11-18 18:44:42.010362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.010423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.010579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.010631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.010786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.010822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.011023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.011115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.011295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.011356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 
00:37:43.917 [2024-11-18 18:44:42.011516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.011551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.011721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.011757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.011914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.011961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.012144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.012180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.012353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.012413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 
00:37:43.917 [2024-11-18 18:44:42.012537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.012578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.012749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.012784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.012921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.012973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.013235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.013274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.917 qpair failed and we were unable to recover it. 00:37:43.917 [2024-11-18 18:44:42.013413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.917 [2024-11-18 18:44:42.013450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.918 qpair failed and we were unable to recover it. 
00:37:43.921 [2024-11-18 18:44:42.035718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.035769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.035954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.036010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.036168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.036213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.036395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.036455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.036599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.036642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 
00:37:43.921 [2024-11-18 18:44:42.036801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.036835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.036956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.036992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.037135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.037187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.037327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.037366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.037522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.037560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 
00:37:43.921 [2024-11-18 18:44:42.037738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.037773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.037927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.037966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.038085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.038123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.038305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.038343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.038483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.038521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 
00:37:43.921 [2024-11-18 18:44:42.038708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.038744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.038937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.038987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.039126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.039182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.039432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.039489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.039653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.039689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 
00:37:43.921 [2024-11-18 18:44:42.039795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.039830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.040017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.040068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.040240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.040303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.040432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.040471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.040663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.040699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 
00:37:43.921 [2024-11-18 18:44:42.040804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.040839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.040971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.041005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.041119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.041154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.041309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.041363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.041538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.041589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 
00:37:43.921 [2024-11-18 18:44:42.041788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.041838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.042037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.042076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.042371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.042431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.042544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.042596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.921 [2024-11-18 18:44:42.042744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.042778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 
00:37:43.921 [2024-11-18 18:44:42.042888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.921 [2024-11-18 18:44:42.042921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.921 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.043079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.043113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.043213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.043245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.043408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.043474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.043634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.043702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 
00:37:43.922 [2024-11-18 18:44:42.043822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.043861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.044031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.044073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.044238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.044277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.044465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.044542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.044696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.044732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 
00:37:43.922 [2024-11-18 18:44:42.044839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.044874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.044985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.045020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.045233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.045299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.045427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.045481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.045676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.045721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 
00:37:43.922 [2024-11-18 18:44:42.045840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.045875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.046019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.046073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.046248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.046287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.046397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.046435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.046641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.046692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 
00:37:43.922 [2024-11-18 18:44:42.046877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.046933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.047126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.047181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.047460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.047520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.047705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.047741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.047851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.047905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 
00:37:43.922 [2024-11-18 18:44:42.048034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.048075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.048291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.048348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.048459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.048498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.048616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.048670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.048815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.048864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 
00:37:43.922 [2024-11-18 18:44:42.049032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.049094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.049211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.049247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.049358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.049394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.049549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.049585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.049743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.049799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 
00:37:43.922 [2024-11-18 18:44:42.049973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.050009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.050121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.050156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.050270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.050307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.922 [2024-11-18 18:44:42.050454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.922 [2024-11-18 18:44:42.050490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.922 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.050636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.050687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 
00:37:43.923 [2024-11-18 18:44:42.050838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.050875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.051062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.051118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.051376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.051433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.051561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.051597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.051719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.051755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 
00:37:43.923 [2024-11-18 18:44:42.051910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.051962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.052156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.052210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.052345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.052380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.052520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.052556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.052679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.052715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 
00:37:43.923 [2024-11-18 18:44:42.052822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.052858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.053027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.053064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.053195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.053230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.053333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.053368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.053507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.053542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 
00:37:43.923 [2024-11-18 18:44:42.053721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.053772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.053943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.053998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.054263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.054323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.054473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.054514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.054635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.054688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 
00:37:43.923 [2024-11-18 18:44:42.054820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.054870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.055084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.055159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.055370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.055431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.055598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.055648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.055804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.055859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 
00:37:43.923 [2024-11-18 18:44:42.055998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.056037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.056183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.056221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.056335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.056374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.056492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.056531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.056699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.056735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 
00:37:43.923 [2024-11-18 18:44:42.056861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.056901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.057073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.057112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.057287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.057325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.057472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.057510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.057670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.057711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 
00:37:43.923 [2024-11-18 18:44:42.057907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.057963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.923 qpair failed and we were unable to recover it. 00:37:43.923 [2024-11-18 18:44:42.058177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.923 [2024-11-18 18:44:42.058218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.058374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.058414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.058543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.058582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.058725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.058760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 
00:37:43.924 [2024-11-18 18:44:42.058937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.058976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.059130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.059186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.059309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.059361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.059477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.059530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.059675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.059745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 
00:37:43.924 [2024-11-18 18:44:42.059905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.059946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.060065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.060103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.060256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.060291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.060434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.060489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.060676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.060741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 
00:37:43.924 [2024-11-18 18:44:42.060876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.060913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.061080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.061114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.061224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.061259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.061396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.061431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.061563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.061628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 
00:37:43.924 [2024-11-18 18:44:42.061893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.061959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.062109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.062155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.062279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.062319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.062470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.062508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.062677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.062714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 
00:37:43.924 [2024-11-18 18:44:42.062851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.062887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.063024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.063060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.063218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.063253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.063387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.063421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.063548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.063598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 
00:37:43.924 [2024-11-18 18:44:42.063763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.063805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.063917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.063954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.064121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.064157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.064263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.924 [2024-11-18 18:44:42.064300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.924 qpair failed and we were unable to recover it. 00:37:43.924 [2024-11-18 18:44:42.064459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.064509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 
00:37:43.925 [2024-11-18 18:44:42.064650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.064688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 00:37:43.925 [2024-11-18 18:44:42.064901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.064956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 00:37:43.925 [2024-11-18 18:44:42.065135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.065239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 00:37:43.925 [2024-11-18 18:44:42.065420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.065460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 00:37:43.925 [2024-11-18 18:44:42.065644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.065686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 
00:37:43.925 [2024-11-18 18:44:42.065785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.065822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 00:37:43.925 [2024-11-18 18:44:42.065980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.066019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 00:37:43.925 [2024-11-18 18:44:42.066245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.066284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 00:37:43.925 [2024-11-18 18:44:42.066436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.066476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 00:37:43.925 [2024-11-18 18:44:42.066622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.066688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 
00:37:43.925 [2024-11-18 18:44:42.066842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.066881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 00:37:43.925 [2024-11-18 18:44:42.067036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.067075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 00:37:43.925 [2024-11-18 18:44:42.067242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.067306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 00:37:43.925 [2024-11-18 18:44:42.067486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.067524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 00:37:43.925 [2024-11-18 18:44:42.067685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.067720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 
00:37:43.925 [2024-11-18 18:44:42.067829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.067863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 00:37:43.925 [2024-11-18 18:44:42.067996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.068030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 00:37:43.925 [2024-11-18 18:44:42.068169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.068203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 00:37:43.925 [2024-11-18 18:44:42.068403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.068444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 00:37:43.925 [2024-11-18 18:44:42.068578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.068622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 
00:37:43.925 [2024-11-18 18:44:42.068783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.068833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 00:37:43.925 [2024-11-18 18:44:42.069044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.069111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 00:37:43.925 [2024-11-18 18:44:42.069297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.069360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 00:37:43.925 [2024-11-18 18:44:42.069537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.069576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 00:37:43.925 [2024-11-18 18:44:42.069719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.925 [2024-11-18 18:44:42.069754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.925 qpair failed and we were unable to recover it. 
00:37:43.925 [2024-11-18 18:44:42.069856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.925 [2024-11-18 18:44:42.069892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.925 qpair failed and we were unable to recover it.
00:37:43.925 [2024-11-18 18:44:42.070092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.925 [2024-11-18 18:44:42.070171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.925 qpair failed and we were unable to recover it.
00:37:43.925 [2024-11-18 18:44:42.070372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.925 [2024-11-18 18:44:42.070431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.925 qpair failed and we were unable to recover it.
00:37:43.925 [2024-11-18 18:44:42.070568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.925 [2024-11-18 18:44:42.070604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.925 qpair failed and we were unable to recover it.
00:37:43.925 [2024-11-18 18:44:42.070728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.925 [2024-11-18 18:44:42.070764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.925 qpair failed and we were unable to recover it.
00:37:43.925 [2024-11-18 18:44:42.070922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.925 [2024-11-18 18:44:42.070961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.925 qpair failed and we were unable to recover it.
00:37:43.925 [2024-11-18 18:44:42.071152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.925 [2024-11-18 18:44:42.071186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.925 qpair failed and we were unable to recover it.
00:37:43.925 [2024-11-18 18:44:42.071443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.925 [2024-11-18 18:44:42.071481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.925 qpair failed and we were unable to recover it.
00:37:43.925 [2024-11-18 18:44:42.071688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.925 [2024-11-18 18:44:42.071726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.925 qpair failed and we were unable to recover it.
00:37:43.925 [2024-11-18 18:44:42.071867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.925 [2024-11-18 18:44:42.071902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.925 qpair failed and we were unable to recover it.
00:37:43.925 [2024-11-18 18:44:42.072092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.925 [2024-11-18 18:44:42.072153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.925 qpair failed and we were unable to recover it.
00:37:43.925 [2024-11-18 18:44:42.072388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.925 [2024-11-18 18:44:42.072427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.925 qpair failed and we were unable to recover it.
00:37:43.925 [2024-11-18 18:44:42.072546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.072586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.072750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.072800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.073010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.073071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.073272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.073311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.073451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.073504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.073622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.073674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.073802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.073835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.073976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.074017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.074229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.074288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.074460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.074499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.074676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.074713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.074835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.074902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.075052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.075089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.075217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.075253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.075444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.075503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.075682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.075718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.075847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.075899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.076019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.076058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.076291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.076329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.076444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.076484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.076612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.076669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.076819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.076855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.077044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.077108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.077290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.077355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.077556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.077595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.077748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.077784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.077911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.077950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.078098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.078133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.078264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.078298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.078460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.078499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.078683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.078719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.078857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.078911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.079028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.079067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.079198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.079233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.079383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.079420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.079613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.079653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.079806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.079841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.926 [2024-11-18 18:44:42.080000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.926 [2024-11-18 18:44:42.080051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.926 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.080174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.080213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.080347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.080382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.080484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.080518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.080697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.080762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.080932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.080970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.081151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.081190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.081312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.081350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.081503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.081538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.081701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.081755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.081897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.081943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.082078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.082113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.082246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.082309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.082450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.082489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.082635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.082669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.082827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.082900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.083062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.083103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.083231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.083266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.083394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.083431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.083594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.083658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.083840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.083876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.084028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.084068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.084213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.084252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.084386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.084422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.084525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.084560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.084711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.084749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.084888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.084923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.085075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.085114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.085321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.085382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.085515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.085550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.085696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.085731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.085868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.085903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.086084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.086120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.086258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.086309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.086440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.086480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.086634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.086671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.086771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.086805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.086974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.087013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.927 [2024-11-18 18:44:42.087164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.927 [2024-11-18 18:44:42.087199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.927 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.087336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.087389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.087575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.087621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.087779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.087815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.088030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.088070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.088192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.088244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.088406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.088441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.088572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.088633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.088790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.088826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.088962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.088998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.089184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.089247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.089458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.089497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.089657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.089697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.089832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.089884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.090067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.090106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.090241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.090276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.090412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.090447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.090598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.090660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.090802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.090836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.090990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.091028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.091142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.091181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.091360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.091394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.091534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.091569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.091694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.091729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.091889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.091924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.092043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.092078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.092273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.092311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.092469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.928 [2024-11-18 18:44:42.092504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:43.928 qpair failed and we were unable to recover it.
00:37:43.928 [2024-11-18 18:44:42.092654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.928 [2024-11-18 18:44:42.092693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.928 qpair failed and we were unable to recover it. 00:37:43.928 [2024-11-18 18:44:42.092816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.928 [2024-11-18 18:44:42.092854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.928 qpair failed and we were unable to recover it. 00:37:43.928 [2024-11-18 18:44:42.092975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.928 [2024-11-18 18:44:42.093011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.928 qpair failed and we were unable to recover it. 00:37:43.928 [2024-11-18 18:44:42.093125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.928 [2024-11-18 18:44:42.093160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.928 qpair failed and we were unable to recover it. 00:37:43.928 [2024-11-18 18:44:42.093302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.928 [2024-11-18 18:44:42.093337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.928 qpair failed and we were unable to recover it. 
00:37:43.928 [2024-11-18 18:44:42.093532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.928 [2024-11-18 18:44:42.093566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.928 qpair failed and we were unable to recover it. 00:37:43.928 [2024-11-18 18:44:42.093682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.928 [2024-11-18 18:44:42.093717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.928 qpair failed and we were unable to recover it. 00:37:43.928 [2024-11-18 18:44:42.093851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.928 [2024-11-18 18:44:42.093886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.928 qpair failed and we were unable to recover it. 00:37:43.928 [2024-11-18 18:44:42.093986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.928 [2024-11-18 18:44:42.094021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.928 qpair failed and we were unable to recover it. 00:37:43.928 [2024-11-18 18:44:42.094153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.928 [2024-11-18 18:44:42.094187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.928 qpair failed and we were unable to recover it. 
00:37:43.928 [2024-11-18 18:44:42.094303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.094341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.094536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.094571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.094720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.094771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.094974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.095029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.095200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.095237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 
00:37:43.929 [2024-11-18 18:44:42.095341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.095377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.095498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.095538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.095716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.095751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.095883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.095936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.096046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.096084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 
00:37:43.929 [2024-11-18 18:44:42.096262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.096296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.096447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.096485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.096630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.096684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.096824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.096859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.096964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.097004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 
00:37:43.929 [2024-11-18 18:44:42.097143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.097178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.097356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.097391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.097539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.097578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.097718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.097753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.097882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.097917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 
00:37:43.929 [2024-11-18 18:44:42.098098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.098136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.098321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.098356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.098466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.098501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.098635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.098685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.098829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.098867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 
00:37:43.929 [2024-11-18 18:44:42.099009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.099046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.099209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.099244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.099372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.099408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.099557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.099592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.099710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.099745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 
00:37:43.929 [2024-11-18 18:44:42.099912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.099968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.100129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.100167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.100300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.100353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.100504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.929 [2024-11-18 18:44:42.100543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.929 qpair failed and we were unable to recover it. 00:37:43.929 [2024-11-18 18:44:42.100705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.100740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 
00:37:43.930 [2024-11-18 18:44:42.100858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.100910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.101170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.101227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.101348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.101381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.101540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.101592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.101752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.101787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 
00:37:43.930 [2024-11-18 18:44:42.101923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.101957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.102074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.102109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.102283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.102334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.102501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.102536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.102659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.102709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 
00:37:43.930 [2024-11-18 18:44:42.102829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.102867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.103055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.103090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.103228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.103281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.103444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.103482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.103620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.103654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 
00:37:43.930 [2024-11-18 18:44:42.103816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.103851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.103960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.103995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.104128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.104162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.104293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.104327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.104425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.104467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 
00:37:43.930 [2024-11-18 18:44:42.104601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.104660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.104810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.104846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.104998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.105037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.105189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.105229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.105485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.105523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 
00:37:43.930 [2024-11-18 18:44:42.105671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.105724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.105844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.105895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.106055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.106091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.106312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.106368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.106513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.106550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 
00:37:43.930 [2024-11-18 18:44:42.106706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.106741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.106875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.106928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.107054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.107093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.107299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.107337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.107479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.107517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 
00:37:43.930 [2024-11-18 18:44:42.107678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.107713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.930 [2024-11-18 18:44:42.107875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.930 [2024-11-18 18:44:42.107910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.930 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.108012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.108046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.108207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.108245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.108460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.108498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 
00:37:43.931 [2024-11-18 18:44:42.108685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.108721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.108858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.108893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.109060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.109127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.109321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.109374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.109495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.109531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 
00:37:43.931 [2024-11-18 18:44:42.109645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.109682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.109854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.109910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.110075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.110130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.110403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.110463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.110620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.110655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 
00:37:43.931 [2024-11-18 18:44:42.110771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.110805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.110965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.111003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.111197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.111267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.111413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.111451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.111598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.111659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 
00:37:43.931 [2024-11-18 18:44:42.111796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.111831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.111929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.111978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.112103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.112140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.112301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.112340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.112490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.112533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 
00:37:43.931 [2024-11-18 18:44:42.112685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.112719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.112858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.112892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.113020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.113053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.113157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.113192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.113354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.113423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 
00:37:43.931 [2024-11-18 18:44:42.113593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.113658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.113769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.113816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.114008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.114061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.114171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.114207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.114389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.114442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 
00:37:43.931 [2024-11-18 18:44:42.114590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.114650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.114832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.114885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.115024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.115060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.115227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.115281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 00:37:43.931 [2024-11-18 18:44:42.115444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.931 [2024-11-18 18:44:42.115479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.931 qpair failed and we were unable to recover it. 
00:37:43.932 [2024-11-18 18:44:42.115620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.115657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.115782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.115818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.115956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.115991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.116128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.116161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.116270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.116303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 
00:37:43.932 [2024-11-18 18:44:42.116434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.116468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.116625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.116667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.116807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.116843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.116977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.117012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.117115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.117152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 
00:37:43.932 [2024-11-18 18:44:42.117314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.117373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.117511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.117546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.117666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.117701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.117810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.117845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.117960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.117995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 
00:37:43.932 [2024-11-18 18:44:42.118168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.118203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.118347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.118383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.118528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.118563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.118712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.118746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.118938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.119005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 
00:37:43.932 [2024-11-18 18:44:42.119195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.119258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.119403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.119441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.119578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.119618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.119737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.119772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.119923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.120011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 
00:37:43.932 [2024-11-18 18:44:42.120148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.120187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.120350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.120388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.120543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.120577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.120745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.120780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.120929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.120968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 
00:37:43.932 [2024-11-18 18:44:42.121179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.121217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.121351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.121385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.121542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.121599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.121820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.121871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.122045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.122105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 
00:37:43.932 [2024-11-18 18:44:42.122287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.122342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.122475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.122511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.932 qpair failed and we were unable to recover it. 00:37:43.932 [2024-11-18 18:44:42.122642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.932 [2024-11-18 18:44:42.122678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.933 qpair failed and we were unable to recover it. 00:37:43.933 [2024-11-18 18:44:42.122848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.933 [2024-11-18 18:44:42.122905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.933 qpair failed and we were unable to recover it. 00:37:43.933 [2024-11-18 18:44:42.123058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.933 [2024-11-18 18:44:42.123116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.933 qpair failed and we were unable to recover it. 
00:37:43.933 [2024-11-18 18:44:42.123276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.933 [2024-11-18 18:44:42.123330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.933 qpair failed and we were unable to recover it. 00:37:43.933 [2024-11-18 18:44:42.123465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.933 [2024-11-18 18:44:42.123501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.933 qpair failed and we were unable to recover it. 00:37:43.933 [2024-11-18 18:44:42.123642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.933 [2024-11-18 18:44:42.123678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.933 qpair failed and we were unable to recover it. 00:37:43.933 [2024-11-18 18:44:42.123780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.933 [2024-11-18 18:44:42.123814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.933 qpair failed and we were unable to recover it. 00:37:43.933 [2024-11-18 18:44:42.123969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.933 [2024-11-18 18:44:42.124007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.933 qpair failed and we were unable to recover it. 
00:37:43.933 [2024-11-18 18:44:42.124149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.933 [2024-11-18 18:44:42.124187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.933 qpair failed and we were unable to recover it. 00:37:43.933 [2024-11-18 18:44:42.124297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.933 [2024-11-18 18:44:42.124336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.933 qpair failed and we were unable to recover it. 00:37:43.933 [2024-11-18 18:44:42.124487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.933 [2024-11-18 18:44:42.124525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.933 qpair failed and we were unable to recover it. 00:37:43.933 [2024-11-18 18:44:42.124688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.933 [2024-11-18 18:44:42.124738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.933 qpair failed and we were unable to recover it. 00:37:43.933 [2024-11-18 18:44:42.124858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.933 [2024-11-18 18:44:42.124915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.933 qpair failed and we were unable to recover it. 
00:37:43.933 [2024-11-18 18:44:42.125121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.933 [2024-11-18 18:44:42.125161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:43.933 qpair failed and we were unable to recover it. 00:37:43.933 [2024-11-18 18:44:42.125310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.933 [2024-11-18 18:44:42.125349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.933 qpair failed and we were unable to recover it. 00:37:43.933 [2024-11-18 18:44:42.125526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.933 [2024-11-18 18:44:42.125564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.933 qpair failed and we were unable to recover it. 00:37:43.933 [2024-11-18 18:44:42.125740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.933 [2024-11-18 18:44:42.125776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.933 qpair failed and we were unable to recover it. 00:37:43.933 [2024-11-18 18:44:42.125929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.933 [2024-11-18 18:44:42.125966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.933 qpair failed and we were unable to recover it. 
00:37:43.936 [2024-11-18 18:44:42.148639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.936 [2024-11-18 18:44:42.148676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.936 qpair failed and we were unable to recover it. 00:37:43.936 [2024-11-18 18:44:42.148815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.936 [2024-11-18 18:44:42.148851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.936 qpair failed and we were unable to recover it. 00:37:43.936 [2024-11-18 18:44:42.149017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.936 [2024-11-18 18:44:42.149053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.936 qpair failed and we were unable to recover it. 00:37:43.936 [2024-11-18 18:44:42.149197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.936 [2024-11-18 18:44:42.149234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.936 qpair failed and we were unable to recover it. 00:37:43.936 [2024-11-18 18:44:42.149378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.936 [2024-11-18 18:44:42.149412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.936 qpair failed and we were unable to recover it. 
00:37:43.936 [2024-11-18 18:44:42.149551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.936 [2024-11-18 18:44:42.149586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.936 qpair failed and we were unable to recover it. 00:37:43.936 [2024-11-18 18:44:42.149788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.936 [2024-11-18 18:44:42.149832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.936 qpair failed and we were unable to recover it. 00:37:43.936 [2024-11-18 18:44:42.149969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.936 [2024-11-18 18:44:42.150018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.936 qpair failed and we were unable to recover it. 00:37:43.936 [2024-11-18 18:44:42.150197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.936 [2024-11-18 18:44:42.150235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.936 qpair failed and we were unable to recover it. 00:37:43.936 [2024-11-18 18:44:42.150349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.936 [2024-11-18 18:44:42.150387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.936 qpair failed and we were unable to recover it. 
00:37:43.936 [2024-11-18 18:44:42.150536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.936 [2024-11-18 18:44:42.150576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.936 qpair failed and we were unable to recover it. 00:37:43.936 [2024-11-18 18:44:42.150740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.936 [2024-11-18 18:44:42.150776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.936 qpair failed and we were unable to recover it. 00:37:43.936 [2024-11-18 18:44:42.150929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.936 [2024-11-18 18:44:42.150968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.936 qpair failed and we were unable to recover it. 00:37:43.936 [2024-11-18 18:44:42.151139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.936 [2024-11-18 18:44:42.151178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.936 qpair failed and we were unable to recover it. 00:37:43.936 [2024-11-18 18:44:42.151289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.936 [2024-11-18 18:44:42.151328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.936 qpair failed and we were unable to recover it. 
00:37:43.936 [2024-11-18 18:44:42.151484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.936 [2024-11-18 18:44:42.151520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.936 qpair failed and we were unable to recover it. 00:37:43.936 [2024-11-18 18:44:42.151664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.936 [2024-11-18 18:44:42.151701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.936 qpair failed and we were unable to recover it. 00:37:43.936 [2024-11-18 18:44:42.151851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.936 [2024-11-18 18:44:42.151905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.936 qpair failed and we were unable to recover it. 00:37:43.936 [2024-11-18 18:44:42.152114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.936 [2024-11-18 18:44:42.152181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.152396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.152454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 
00:37:43.937 [2024-11-18 18:44:42.152598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.152640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.152752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.152787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.152941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.152981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.153109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.153168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.153302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.153341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 
00:37:43.937 [2024-11-18 18:44:42.153484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.153523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.153684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.153735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.153889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.153938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.154097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.154138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.154411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.154471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 
00:37:43.937 [2024-11-18 18:44:42.154657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.154693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.154800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.154836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.155017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.155055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.155223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.155262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.155414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.155452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 
00:37:43.937 [2024-11-18 18:44:42.155591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.155638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.155809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.155859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.156025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.156081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.156262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.156317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.156443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.156478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 
00:37:43.937 [2024-11-18 18:44:42.156620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.156656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.156875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.156930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.157166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.157208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.157472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.157530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.157683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.157719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 
00:37:43.937 [2024-11-18 18:44:42.157854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.157889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.158053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.158092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.158250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.158351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.158461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.158497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.158630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.158666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 
00:37:43.937 [2024-11-18 18:44:42.158825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.158860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.159022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.159077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.159239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.159286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.159464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.159500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.159641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.159678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 
00:37:43.937 [2024-11-18 18:44:42.159816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.937 [2024-11-18 18:44:42.159850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.937 qpair failed and we were unable to recover it. 00:37:43.937 [2024-11-18 18:44:42.159983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.160020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 00:37:43.938 [2024-11-18 18:44:42.160246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.160304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 00:37:43.938 [2024-11-18 18:44:42.160455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.160493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 00:37:43.938 [2024-11-18 18:44:42.160625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.160663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 
00:37:43.938 [2024-11-18 18:44:42.160823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.160873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 00:37:43.938 [2024-11-18 18:44:42.161072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.161113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 00:37:43.938 [2024-11-18 18:44:42.161251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.161305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 00:37:43.938 [2024-11-18 18:44:42.161446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.161484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 00:37:43.938 [2024-11-18 18:44:42.161629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.161682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 
00:37:43.938 [2024-11-18 18:44:42.161818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.161852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 00:37:43.938 [2024-11-18 18:44:42.161995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.162034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 00:37:43.938 [2024-11-18 18:44:42.162210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.162248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 00:37:43.938 [2024-11-18 18:44:42.162390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.162429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 00:37:43.938 [2024-11-18 18:44:42.162574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.162654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 
00:37:43.938 [2024-11-18 18:44:42.162798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.162836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 00:37:43.938 [2024-11-18 18:44:42.163065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.163100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 00:37:43.938 [2024-11-18 18:44:42.163267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.163327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 00:37:43.938 [2024-11-18 18:44:42.163491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.163529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 00:37:43.938 [2024-11-18 18:44:42.163654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.163690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 
00:37:43.938 [2024-11-18 18:44:42.163852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.163888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 00:37:43.938 [2024-11-18 18:44:42.164068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.164108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 00:37:43.938 [2024-11-18 18:44:42.164245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.164298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 00:37:43.938 [2024-11-18 18:44:42.164420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.164459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 00:37:43.938 [2024-11-18 18:44:42.164636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.938 [2024-11-18 18:44:42.164671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.938 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x6150001f2f00, 0x6150001ffe80, and 0x615000210000 (addr=10.0.0.2, port=4420) through 2024-11-18 18:44:42.186442 ...]
00:37:43.941 [2024-11-18 18:44:42.186554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.941 [2024-11-18 18:44:42.186589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.941 qpair failed and we were unable to recover it. 00:37:43.941 [2024-11-18 18:44:42.186757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.941 [2024-11-18 18:44:42.186792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.941 qpair failed and we were unable to recover it. 00:37:43.941 [2024-11-18 18:44:42.186947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.941 [2024-11-18 18:44:42.186985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.941 qpair failed and we were unable to recover it. 00:37:43.941 [2024-11-18 18:44:42.187145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.941 [2024-11-18 18:44:42.187183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.941 qpair failed and we were unable to recover it. 00:37:43.941 [2024-11-18 18:44:42.187351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.941 [2024-11-18 18:44:42.187388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.941 qpair failed and we were unable to recover it. 
00:37:43.941 [2024-11-18 18:44:42.187561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.941 [2024-11-18 18:44:42.187599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.941 qpair failed and we were unable to recover it. 00:37:43.941 [2024-11-18 18:44:42.187794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.941 [2024-11-18 18:44:42.187828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.941 qpair failed and we were unable to recover it. 00:37:43.941 [2024-11-18 18:44:42.187944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.941 [2024-11-18 18:44:42.187980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.941 qpair failed and we were unable to recover it. 00:37:43.941 [2024-11-18 18:44:42.188156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.941 [2024-11-18 18:44:42.188194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.941 qpair failed and we were unable to recover it. 00:37:43.941 [2024-11-18 18:44:42.188343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.941 [2024-11-18 18:44:42.188380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.941 qpair failed and we were unable to recover it. 
00:37:43.941 [2024-11-18 18:44:42.188562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.941 [2024-11-18 18:44:42.188595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.941 qpair failed and we were unable to recover it. 00:37:43.941 [2024-11-18 18:44:42.188738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.941 [2024-11-18 18:44:42.188773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.941 qpair failed and we were unable to recover it. 00:37:43.941 [2024-11-18 18:44:42.188946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.188984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.189118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.189170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.189311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.189349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 
00:37:43.942 [2024-11-18 18:44:42.189500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.189538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.189722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.189777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.189922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.189960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.190078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.190118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.190281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.190333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 
00:37:43.942 [2024-11-18 18:44:42.190552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.190646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.190841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.190878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.191049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.191105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.191342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.191399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.191574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.191622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 
00:37:43.942 [2024-11-18 18:44:42.191802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.191838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.192085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.192142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.192316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.192354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.192486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.192521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.192657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.192692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 
00:37:43.942 [2024-11-18 18:44:42.192836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.192871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.193005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.193043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.193243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.193282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.193419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.193457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.193618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.193669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 
00:37:43.942 [2024-11-18 18:44:42.193822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.193871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.194037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.194092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.194246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.194301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.194480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.194516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.194675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.194711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 
00:37:43.942 [2024-11-18 18:44:42.194847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.194881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.194986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.195023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.195160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.195196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.195339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.195375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.195498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.195534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 
00:37:43.942 [2024-11-18 18:44:42.195640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.195676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.195811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.195845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.942 [2024-11-18 18:44:42.195980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.942 [2024-11-18 18:44:42.196015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.942 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.196123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.196178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.196306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.196344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 
00:37:43.943 [2024-11-18 18:44:42.196494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.196528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.196628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.196663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.196806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.196844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.197016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.197054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.197161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.197200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 
00:37:43.943 [2024-11-18 18:44:42.197350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.197389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.197547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.197590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.197732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.197767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.197923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.197978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.198146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.198187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 
00:37:43.943 [2024-11-18 18:44:42.198366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.198405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.198563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.198600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.198756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.198791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.198948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.198987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.199245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.199296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 
00:37:43.943 [2024-11-18 18:44:42.199474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.199513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.199624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.199677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.199844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.199879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.200158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.200214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.200396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.200434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 
00:37:43.943 [2024-11-18 18:44:42.200578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.200628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.200781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.200816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.200934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.200987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.201160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.201198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 00:37:43.943 [2024-11-18 18:44:42.201382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.943 [2024-11-18 18:44:42.201420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.943 qpair failed and we were unable to recover it. 
00:37:43.944 [2024-11-18 18:44:42.201593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.944 [2024-11-18 18:44:42.201656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:43.944 qpair failed and we were unable to recover it.
[... repeated log entries condensed: the same pair of errors (posix.c:1054:posix_sock_create connect() failed, errno = 111, and nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error, followed by "qpair failed and we were unable to recover it.") recurs continuously from 18:44:42.201 through 18:44:42.224 for tqpairs 0x6150001f2f00, 0x6150001ffe80, 0x615000210000, and 0x61500021ff00, all targeting addr=10.0.0.2, port=4420 ...]
00:37:44.230 [2024-11-18 18:44:42.224689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.224740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.224911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.224951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.225144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.225204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.225456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.225515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.225641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.225696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 
00:37:44.230 [2024-11-18 18:44:42.225832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.225868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.226057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.226103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.226338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.226379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.226520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.226558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.226717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.226753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 
00:37:44.230 [2024-11-18 18:44:42.226907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.226946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.227140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.227206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.227412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.227450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.227585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.227636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.227790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.227824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 
00:37:44.230 [2024-11-18 18:44:42.227942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.227991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.228155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.228213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.228375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.228429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.228590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.228633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.228789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.228839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 
00:37:44.230 [2024-11-18 18:44:42.229002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.229043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.229307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.229368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.229542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.229578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.229714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.229764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.229928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.229998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 
00:37:44.230 [2024-11-18 18:44:42.230263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.230322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.230472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.230517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.230648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.230700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.230849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.230884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.231023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.231058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 
00:37:44.230 [2024-11-18 18:44:42.231200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.231235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.231369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.231421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.231539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.231577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.231766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.230 [2024-11-18 18:44:42.231802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.230 qpair failed and we were unable to recover it. 00:37:44.230 [2024-11-18 18:44:42.231971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.232006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 
00:37:44.231 [2024-11-18 18:44:42.232116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.232150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.232259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.232294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.232466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.232504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.232654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.232704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.232877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.232934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 
00:37:44.231 [2024-11-18 18:44:42.233091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.233126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.233254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.233307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.233537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.233574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.233750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.233785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.233961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.234000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 
00:37:44.231 [2024-11-18 18:44:42.234134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.234173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.234378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.234417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.234563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.234603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.234765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.234815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.235005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.235055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 
00:37:44.231 [2024-11-18 18:44:42.235219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.235277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.235523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.235580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.235733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.235769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.235955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.236004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.236196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.236272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 
00:37:44.231 [2024-11-18 18:44:42.236469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.236532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.236696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.236731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.236887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.236957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.237177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.237228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.237549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.237605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 
00:37:44.231 [2024-11-18 18:44:42.237778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.237814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.237989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.238027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.238176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.238254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.238396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.238435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.238574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.238621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 
00:37:44.231 [2024-11-18 18:44:42.238786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.238822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.238993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.239037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.239195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.239234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.239400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.239438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.239559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.239597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 
00:37:44.231 [2024-11-18 18:44:42.239741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.239776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.239893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.231 [2024-11-18 18:44:42.239928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.231 qpair failed and we were unable to recover it. 00:37:44.231 [2024-11-18 18:44:42.240065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.232 [2024-11-18 18:44:42.240099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.232 qpair failed and we were unable to recover it. 00:37:44.232 [2024-11-18 18:44:42.240254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.232 [2024-11-18 18:44:42.240292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.232 qpair failed and we were unable to recover it. 00:37:44.232 [2024-11-18 18:44:42.240466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.232 [2024-11-18 18:44:42.240505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.232 qpair failed and we were unable to recover it. 
00:37:44.232 [2024-11-18 18:44:42.240652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.240688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.240818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.240853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.241048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.241083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.241232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.241270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.241379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.241417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.241580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.241626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.241790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.241825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.242051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.242101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.242272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.242315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.242497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.242537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.242713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.242750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.242866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.242916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.243097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.243139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.243276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.243316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.243475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.243514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.243635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.243689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.243797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.243832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.243970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.244023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.244184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.244223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.244419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.244457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.244592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.244641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.244822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.244871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.245049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.245148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.245284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.245322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.245436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.245474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.245612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.245646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.245805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.245839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.246019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.246056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.232 [2024-11-18 18:44:42.246195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.232 [2024-11-18 18:44:42.246232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.232 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.246382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.246419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.246538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.246576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.246762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.246817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.246985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.247033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.247163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.247205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.247350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.247389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.247516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.247550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.247713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.247762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.247900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.247942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.248121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.248160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.248339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.248377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.248523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.248562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.248711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.248754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.248909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.248950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.249100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.249138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.249273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.249327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.249461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.249503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.249721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.249770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.249935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.249983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.250143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.250197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.250353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.250408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.250544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.250579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.250744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.250793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.250951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.250992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.251134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.251173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.251348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.251386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.251558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.251622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.251777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.251814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.251949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.251987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.252205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.252244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.252461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.252519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.252663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.252697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.252832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.252886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.253003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.253042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.253233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.253270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.253390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.253427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.253602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.253685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.233 qpair failed and we were unable to recover it.
00:37:44.233 [2024-11-18 18:44:42.253810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.233 [2024-11-18 18:44:42.253860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.253978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.254015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.254175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.254227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.254369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.254422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.254533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.254569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.254741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.254776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.254936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.254975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.255152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.255190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.255414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.255472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.255621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.255693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.255890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.255928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.256074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.256112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.256248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.256324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.256495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.256532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.256665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.256701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.256881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.256947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.257089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.257144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.257329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.257386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.257504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.257542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.257714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.257748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.257884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.257919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.258077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.258114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.258307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.258344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.258462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.258500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.258673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.258708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.258872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.258905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.259059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.259096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.259272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.259310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.259498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.259536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.259692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.259725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.259864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.259915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.260046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.260097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.260274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.260317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.260470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.260510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.260676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.260709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.260820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.260864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.260997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.261041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.261243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.234 [2024-11-18 18:44:42.261281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.234 qpair failed and we were unable to recover it.
00:37:44.234 [2024-11-18 18:44:42.261402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.235 [2024-11-18 18:44:42.261440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.235 qpair failed and we were unable to recover it.
00:37:44.235 [2024-11-18 18:44:42.261562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.235 [2024-11-18 18:44:42.261600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.235 qpair failed and we were unable to recover it.
00:37:44.235 [2024-11-18 18:44:42.261752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.235 [2024-11-18 18:44:42.261786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.235 qpair failed and we were unable to recover it.
00:37:44.235 [2024-11-18 18:44:42.261897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.235 [2024-11-18 18:44:42.261930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.235 qpair failed and we were unable to recover it.
00:37:44.235 [2024-11-18 18:44:42.262029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.235 [2024-11-18 18:44:42.262063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.235 qpair failed and we were unable to recover it.
00:37:44.235 [2024-11-18 18:44:42.262201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.235 [2024-11-18 18:44:42.262238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.235 qpair failed and we were unable to recover it.
00:37:44.235 [2024-11-18 18:44:42.262414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.235 [2024-11-18 18:44:42.262462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.235 qpair failed and we were unable to recover it.
00:37:44.235 [2024-11-18 18:44:42.262592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.235 [2024-11-18 18:44:42.262636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.235 qpair failed and we were unable to recover it.
00:37:44.235 [2024-11-18 18:44:42.262790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.235 [2024-11-18 18:44:42.262824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.235 qpair failed and we were unable to recover it.
00:37:44.235 [2024-11-18 18:44:42.262934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.235 [2024-11-18 18:44:42.262968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.235 qpair failed and we were unable to recover it.
00:37:44.235 [2024-11-18 18:44:42.263093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.235 [2024-11-18 18:44:42.263130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.235 qpair failed and we were unable to recover it.
00:37:44.235 [2024-11-18 18:44:42.263322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.263359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.263464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.263500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.263635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.263689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.263798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.263831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.263986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.264054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 
00:37:44.235 [2024-11-18 18:44:42.264215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.264253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.264387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.264440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.264552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.264589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.264754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.264803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.264934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.264982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 
00:37:44.235 [2024-11-18 18:44:42.265160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.265199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.265321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.265359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.265501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.265538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.265664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.265698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.265819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.265855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 
00:37:44.235 [2024-11-18 18:44:42.266007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.266043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.266153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.266189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.266290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.266326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.266484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.266538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.266697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.266736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 
00:37:44.235 [2024-11-18 18:44:42.266900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.266952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.267096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.267130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.267240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.267275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.267396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.267450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.267595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.267638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 
00:37:44.235 [2024-11-18 18:44:42.267776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.267825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.235 [2024-11-18 18:44:42.268025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.235 [2024-11-18 18:44:42.268066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.235 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.268232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.268302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.268433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.268472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.268675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.268711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 
00:37:44.236 [2024-11-18 18:44:42.268867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.268920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.269141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.269198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.269405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.269466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.269602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.269658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.269790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.269823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 
00:37:44.236 [2024-11-18 18:44:42.269957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.270006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.270215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.270315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.270467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.270505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.270627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.270685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.270847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.270906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 
00:37:44.236 [2024-11-18 18:44:42.271072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.271126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.271325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.271384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.271519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.271553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.271718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.271765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.271932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.271971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 
00:37:44.236 [2024-11-18 18:44:42.272150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.272187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.272386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.272483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.272595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.272657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.272797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.272831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.272980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.273013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 
00:37:44.236 [2024-11-18 18:44:42.273179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.273262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.273379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.273415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.273582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.273623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.273764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.273798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.273948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.273989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 
00:37:44.236 [2024-11-18 18:44:42.274186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.274223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.274365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.274402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.274512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.274548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.274758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.274813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.274988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.275036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 
00:37:44.236 [2024-11-18 18:44:42.275181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.275222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.275344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.275382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.275499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.275533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.236 qpair failed and we were unable to recover it. 00:37:44.236 [2024-11-18 18:44:42.275667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.236 [2024-11-18 18:44:42.275706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 00:37:44.237 [2024-11-18 18:44:42.275851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.275904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 
00:37:44.237 [2024-11-18 18:44:42.276027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.276060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 00:37:44.237 [2024-11-18 18:44:42.276220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.276253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 00:37:44.237 [2024-11-18 18:44:42.276409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.276445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 00:37:44.237 [2024-11-18 18:44:42.276575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.276615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 00:37:44.237 [2024-11-18 18:44:42.276736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.276769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 
00:37:44.237 [2024-11-18 18:44:42.276930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.276968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 00:37:44.237 [2024-11-18 18:44:42.277154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.277188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 00:37:44.237 [2024-11-18 18:44:42.277336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.277373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 00:37:44.237 [2024-11-18 18:44:42.277521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.277558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 00:37:44.237 [2024-11-18 18:44:42.277703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.277738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 
00:37:44.237 [2024-11-18 18:44:42.277836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.277869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 00:37:44.237 [2024-11-18 18:44:42.277996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.278049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 00:37:44.237 [2024-11-18 18:44:42.278279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.278339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 00:37:44.237 [2024-11-18 18:44:42.278472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.278510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 00:37:44.237 [2024-11-18 18:44:42.278691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.278739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 
00:37:44.237 [2024-11-18 18:44:42.278897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.278945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 00:37:44.237 [2024-11-18 18:44:42.279135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.279190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 00:37:44.237 [2024-11-18 18:44:42.279339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.279392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 00:37:44.237 [2024-11-18 18:44:42.279523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.279557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 00:37:44.237 [2024-11-18 18:44:42.279675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.237 [2024-11-18 18:44:42.279710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.237 qpair failed and we were unable to recover it. 
00:37:44.237 [2024-11-18 18:44:42.279863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.237 [2024-11-18 18:44:42.279915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.237 qpair failed and we were unable to recover it.
00:37:44.237 [2024-11-18 18:44:42.280158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.237 [2024-11-18 18:44:42.280199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.237 qpair failed and we were unable to recover it.
00:37:44.237 [2024-11-18 18:44:42.280352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.237 [2024-11-18 18:44:42.280432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.237 qpair failed and we were unable to recover it.
00:37:44.237 [2024-11-18 18:44:42.280590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.237 [2024-11-18 18:44:42.280634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.237 qpair failed and we were unable to recover it.
00:37:44.237 [2024-11-18 18:44:42.280769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.237 [2024-11-18 18:44:42.280820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.237 qpair failed and we were unable to recover it.
00:37:44.237 [2024-11-18 18:44:42.280975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.237 [2024-11-18 18:44:42.281011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.237 qpair failed and we were unable to recover it.
00:37:44.237 [2024-11-18 18:44:42.281121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.237 [2024-11-18 18:44:42.281156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.237 qpair failed and we were unable to recover it.
00:37:44.237 [2024-11-18 18:44:42.281412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.237 [2024-11-18 18:44:42.281470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.237 qpair failed and we were unable to recover it.
00:37:44.237 [2024-11-18 18:44:42.281587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.237 [2024-11-18 18:44:42.281636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.237 qpair failed and we were unable to recover it.
00:37:44.237 [2024-11-18 18:44:42.281757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.237 [2024-11-18 18:44:42.281790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.237 qpair failed and we were unable to recover it.
00:37:44.237 [2024-11-18 18:44:42.281955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.237 [2024-11-18 18:44:42.282011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.237 qpair failed and we were unable to recover it.
00:37:44.237 [2024-11-18 18:44:42.282115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.237 [2024-11-18 18:44:42.282154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.237 qpair failed and we were unable to recover it.
00:37:44.237 [2024-11-18 18:44:42.282333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.282387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.282543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.282581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.282746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.282811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.282973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.283015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.283189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.283259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.283382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.283420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.283563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.283616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.283743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.283778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.283911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.283945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.284080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.284113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.284259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.284307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.284457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.284492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.284612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.284659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.284824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.284859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.285007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.285041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.285175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.285228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.285391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.285426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.285565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.285599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.285735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.285769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.285918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.285952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.286096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.286141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.286306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.286339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.286472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.286508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.286631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.286678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.286812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.286860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.286997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.287032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.287134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.287167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.287262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.287295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.287453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.287508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.287618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.287659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.287793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.287841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.288066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.288141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.288379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.288446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.288654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.288688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.288820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.288867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.289143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.289203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.289352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.289432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.289559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.238 [2024-11-18 18:44:42.289596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.238 qpair failed and we were unable to recover it.
00:37:44.238 [2024-11-18 18:44:42.289756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.289805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.289967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.290022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.290276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.290335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.290495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.290530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.290676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.290711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.290864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.290917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.291153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.291212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.291390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.291452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.291650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.291691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.291822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.291869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.292028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.292067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.292273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.292310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.292448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.292485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.292620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.292670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.292802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.292835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.292986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.293019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.293183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.293216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.293438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.293475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.293621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.293673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.293805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.293839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.293986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.294019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.294199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.294235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.294362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.294398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.294546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.294579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.294702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.294750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.294991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.295045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.295271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.295312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.295461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.295499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.295684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.295719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.295886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.295920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.296033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.296070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.296233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.296288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.296432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.296471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.296621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.296674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.296787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.296820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.297045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.297078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.297231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.297267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.239 [2024-11-18 18:44:42.297374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.239 [2024-11-18 18:44:42.297411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.239 qpair failed and we were unable to recover it.
00:37:44.240 [2024-11-18 18:44:42.297541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.240 [2024-11-18 18:44:42.297574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.240 qpair failed and we were unable to recover it.
00:37:44.240 [2024-11-18 18:44:42.297713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.240 [2024-11-18 18:44:42.297760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.240 qpair failed and we were unable to recover it.
00:37:44.240 [2024-11-18 18:44:42.297922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.240 [2024-11-18 18:44:42.297961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.240 qpair failed and we were unable to recover it.
00:37:44.240 [2024-11-18 18:44:42.298181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.240 [2024-11-18 18:44:42.298220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.240 qpair failed and we were unable to recover it.
00:37:44.240 [2024-11-18 18:44:42.298335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.240 [2024-11-18 18:44:42.298389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.240 qpair failed and we were unable to recover it.
00:37:44.240 [2024-11-18 18:44:42.298536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.240 [2024-11-18 18:44:42.298570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.240 qpair failed and we were unable to recover it.
00:37:44.240 [2024-11-18 18:44:42.298740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.240 [2024-11-18 18:44:42.298774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.240 qpair failed and we were unable to recover it.
00:37:44.240 [2024-11-18 18:44:42.299009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.299042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.299183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.299215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.299334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.299370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.299491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.299536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.299710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.299757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 
00:37:44.240 [2024-11-18 18:44:42.299919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.299966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.300142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.300180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.300377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.300443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.300587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.300631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.300759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.300793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 
00:37:44.240 [2024-11-18 18:44:42.300927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.300994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.301120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.301161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.301362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.301461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.301601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.301641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.301781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.301814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 
00:37:44.240 [2024-11-18 18:44:42.301945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.301978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.302080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.302132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.302254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.302291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.302438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.302476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.302624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.302676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 
00:37:44.240 [2024-11-18 18:44:42.302822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.302857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.303010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.303090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.303238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.303276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.303425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.303463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.303641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.303689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 
00:37:44.240 [2024-11-18 18:44:42.303877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.303930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.304060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.304113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.304260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.304297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.304494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.304530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 00:37:44.240 [2024-11-18 18:44:42.304672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.304705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.240 qpair failed and we were unable to recover it. 
00:37:44.240 [2024-11-18 18:44:42.304815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.240 [2024-11-18 18:44:42.304850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.304992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.305048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.305232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.305284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.305422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.305463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.305624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.305683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 
00:37:44.241 [2024-11-18 18:44:42.305789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.305822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.305950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.305987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.306199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.306233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.306369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.306406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.306561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.306594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 
00:37:44.241 [2024-11-18 18:44:42.306715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.306748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.306872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.306912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.307091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.307129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.307301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.307344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.307454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.307491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 
00:37:44.241 [2024-11-18 18:44:42.307623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.307664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.307828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.307864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.308005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.308039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.308252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.308289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.308432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.308469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 
00:37:44.241 [2024-11-18 18:44:42.308660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.308694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.308823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.308857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.308955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.308988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.309176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.309213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.309358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.309394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 
00:37:44.241 [2024-11-18 18:44:42.309513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.309549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.309722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.309769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.309922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.309957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.310103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.310140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.310256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.310293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 
00:37:44.241 [2024-11-18 18:44:42.310426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.310477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.310633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.310684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.310781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.310814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.310963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.241 [2024-11-18 18:44:42.310997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.241 qpair failed and we were unable to recover it. 00:37:44.241 [2024-11-18 18:44:42.311151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.311195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 
00:37:44.242 [2024-11-18 18:44:42.311303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.311355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 00:37:44.242 [2024-11-18 18:44:42.311476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.311513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 00:37:44.242 [2024-11-18 18:44:42.311698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.311731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 00:37:44.242 [2024-11-18 18:44:42.311829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.311862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 00:37:44.242 [2024-11-18 18:44:42.312039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.312073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 
00:37:44.242 [2024-11-18 18:44:42.312238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.312274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 00:37:44.242 [2024-11-18 18:44:42.312449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.312485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 00:37:44.242 [2024-11-18 18:44:42.312619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.312658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 00:37:44.242 [2024-11-18 18:44:42.312791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.312823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 00:37:44.242 [2024-11-18 18:44:42.313031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.313098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 
00:37:44.242 [2024-11-18 18:44:42.313226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.313278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 00:37:44.242 [2024-11-18 18:44:42.313459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.313496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 00:37:44.242 [2024-11-18 18:44:42.313641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.313691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 00:37:44.242 [2024-11-18 18:44:42.313825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.313859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 00:37:44.242 [2024-11-18 18:44:42.313986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.314036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 
00:37:44.242 [2024-11-18 18:44:42.314183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.314221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 00:37:44.242 [2024-11-18 18:44:42.314347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.314397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 00:37:44.242 [2024-11-18 18:44:42.314525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.314579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 00:37:44.242 [2024-11-18 18:44:42.314756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.314798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 00:37:44.242 [2024-11-18 18:44:42.314914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.242 [2024-11-18 18:44:42.314949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.242 qpair failed and we were unable to recover it. 
00:37:44.242 [2024-11-18 18:44:42.315084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.242 [2024-11-18 18:44:42.315116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.242 qpair failed and we were unable to recover it.
00:37:44.242 [2024-11-18 18:44:42.315293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.242 [2024-11-18 18:44:42.315346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.242 qpair failed and we were unable to recover it.
00:37:44.242 [2024-11-18 18:44:42.315493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.242 [2024-11-18 18:44:42.315530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.242 qpair failed and we were unable to recover it.
00:37:44.242 [2024-11-18 18:44:42.315709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.242 [2024-11-18 18:44:42.315744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.242 qpair failed and we were unable to recover it.
00:37:44.242 [2024-11-18 18:44:42.315842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.242 [2024-11-18 18:44:42.315875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.242 qpair failed and we were unable to recover it.
00:37:44.242 [2024-11-18 18:44:42.315997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.242 [2024-11-18 18:44:42.316033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.242 qpair failed and we were unable to recover it.
00:37:44.242 [2024-11-18 18:44:42.316174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.242 [2024-11-18 18:44:42.316211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.242 qpair failed and we were unable to recover it.
00:37:44.242 [2024-11-18 18:44:42.316355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.242 [2024-11-18 18:44:42.316392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.242 qpair failed and we were unable to recover it.
00:37:44.242 [2024-11-18 18:44:42.316559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.242 [2024-11-18 18:44:42.316613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.242 qpair failed and we were unable to recover it.
00:37:44.242 [2024-11-18 18:44:42.316778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.242 [2024-11-18 18:44:42.316812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.242 qpair failed and we were unable to recover it.
00:37:44.242 [2024-11-18 18:44:42.317038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.242 [2024-11-18 18:44:42.317079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.242 qpair failed and we were unable to recover it.
00:37:44.242 [2024-11-18 18:44:42.317203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.242 [2024-11-18 18:44:42.317241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.242 qpair failed and we were unable to recover it.
00:37:44.242 [2024-11-18 18:44:42.317503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.242 [2024-11-18 18:44:42.317559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.242 qpair failed and we were unable to recover it.
00:37:44.242 [2024-11-18 18:44:42.317695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.242 [2024-11-18 18:44:42.317729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.242 qpair failed and we were unable to recover it.
00:37:44.242 [2024-11-18 18:44:42.317862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.242 [2024-11-18 18:44:42.317910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.242 qpair failed and we were unable to recover it.
00:37:44.242 [2024-11-18 18:44:42.318060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.242 [2024-11-18 18:44:42.318093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.242 qpair failed and we were unable to recover it.
00:37:44.242 [2024-11-18 18:44:42.318230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.242 [2024-11-18 18:44:42.318282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.242 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.318431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.318468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.318600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.318640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.318739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.318772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.318933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.318967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.319194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.319232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.319342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.319378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.319550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.319588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.319746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.319794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.319952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.320005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.320172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.320228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.320375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.320438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.320569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.320603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.320767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.320815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.320953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.320993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.321119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.321172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.321370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.321428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.321573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.321626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.321749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.321782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.321936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.321974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.322140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.322197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.322343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.322380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.322524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.322568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.322714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.322752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.322861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.322894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.323031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.323083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.323230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.323267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.323392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.323442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.323625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.323658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.323795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.323830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.323934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.323968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.324093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.324126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.324285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.324323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.324524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.324568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.324739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.324787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.324928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.324963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.325226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.325285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.325547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.325604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.243 [2024-11-18 18:44:42.325756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.243 [2024-11-18 18:44:42.325792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.243 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.325930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.325963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.326104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.326137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.326285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.326335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.326486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.326523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.326684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.326717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.326822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.326855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.326987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.327038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.327194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.327231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.327346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.327384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.327563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.327595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.327784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.327833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.328001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.328041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.328225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.328265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.328422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.328459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.328637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.328690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.328796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.328828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.328943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.328978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.329196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.329234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.329413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.329481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.329628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.329678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.329797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.329845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.329998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.330033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.330171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.330206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.330365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.330443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.330587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.330650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.330848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.330916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.331066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.331104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.331232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.331283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.331421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.331457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.331591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.331658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.331796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.331829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.331975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.332010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.332233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.332294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.332547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.332616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.332750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.332783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.332944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.332998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.333135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.333190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.244 qpair failed and we were unable to recover it.
00:37:44.244 [2024-11-18 18:44:42.333323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.244 [2024-11-18 18:44:42.333363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.333524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.333562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.333729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.333778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.333919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.333953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.334099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.334137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.334363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.334420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.334534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.334570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.334734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.334770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.334936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.334975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.335172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.335271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.335509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.335568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.335742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.335777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.335951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.335984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.336103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.336137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.336293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.336365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.336565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.336603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.336744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.336778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.336927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.336964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.337164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.337223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.337366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.337403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.337577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.337630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.337758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.337791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.337939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.245 [2024-11-18 18:44:42.337996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.245 qpair failed and we were unable to recover it.
00:37:44.245 [2024-11-18 18:44:42.338135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-11-18 18:44:42.338191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-11-18 18:44:42.338452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-11-18 18:44:42.338508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-11-18 18:44:42.338698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-11-18 18:44:42.338735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-11-18 18:44:42.338909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-11-18 18:44:42.338969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-11-18 18:44:42.339181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-11-18 18:44:42.339289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 
00:37:44.245 [2024-11-18 18:44:42.339409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-11-18 18:44:42.339446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-11-18 18:44:42.339586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-11-18 18:44:42.339640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-11-18 18:44:42.339812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-11-18 18:44:42.339860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-11-18 18:44:42.340127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-11-18 18:44:42.340200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-11-18 18:44:42.340324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-11-18 18:44:42.340364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 
00:37:44.245 [2024-11-18 18:44:42.340515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-11-18 18:44:42.340552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-11-18 18:44:42.340698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-11-18 18:44:42.340733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-11-18 18:44:42.340837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-11-18 18:44:42.340870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-11-18 18:44:42.341058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-11-18 18:44:42.341134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 00:37:44.245 [2024-11-18 18:44:42.341370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.245 [2024-11-18 18:44:42.341433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.245 qpair failed and we were unable to recover it. 
00:37:44.245 [2024-11-18 18:44:42.341579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.341631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.341779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.341840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.341995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.342030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.342178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.342213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.342345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.342380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 
00:37:44.246 [2024-11-18 18:44:42.342540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.342577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.342745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.342794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.342958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.342996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.343156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.343189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.343324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.343357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 
00:37:44.246 [2024-11-18 18:44:42.343512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.343549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.343726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.343760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.343903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.343958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.344161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.344195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.344446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.344511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 
00:37:44.246 [2024-11-18 18:44:42.344681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.344716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.344964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.345018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.345266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.345319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.345473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.345512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.345702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.345738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 
00:37:44.246 [2024-11-18 18:44:42.345852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.345897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.346058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.346098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.346249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.346287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.346438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.346474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.346629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.346681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 
00:37:44.246 [2024-11-18 18:44:42.346855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.346917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.347111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.347144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.347261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.347294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.347427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.347465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.347627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.347662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 
00:37:44.246 [2024-11-18 18:44:42.347790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.347839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.347988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.348023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.348183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.348221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.246 qpair failed and we were unable to recover it. 00:37:44.246 [2024-11-18 18:44:42.348366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.246 [2024-11-18 18:44:42.348402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.348545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.348581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 
00:37:44.247 [2024-11-18 18:44:42.348763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.348811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.348962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.348997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.349108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.349140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.349306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.349339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.349600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.349665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 
00:37:44.247 [2024-11-18 18:44:42.349816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.349864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.350157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.350217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.350402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.350470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.350671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.350706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.350847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.350904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 
00:37:44.247 [2024-11-18 18:44:42.351047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.351082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.351285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.351353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.351468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.351505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.351641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.351676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.351830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.351903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 
00:37:44.247 [2024-11-18 18:44:42.352026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.352066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.352247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.352284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.352403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.352441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.352601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.352664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.352812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.352859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 
00:37:44.247 [2024-11-18 18:44:42.353036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.353078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.353209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.353262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.353466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.353500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.353627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.353661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 00:37:44.247 [2024-11-18 18:44:42.353769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.353803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 
00:37:44.247 [2024-11-18 18:44:42.353933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.247 [2024-11-18 18:44:42.353967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.247 qpair failed and we were unable to recover it. 
[log repeats condensed: the same error pair — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — recurs continuously from [2024-11-18 18:44:42.353933] through [2024-11-18 18:44:42.375734] for tqpairs 0x615000210000, 0x6150001f2f00, and 0x6150001ffe80.]
00:37:44.251 [2024-11-18 18:44:42.375840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.375880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.375982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.376016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.376125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.376161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.376293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.376327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.376432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.376465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 
00:37:44.251 [2024-11-18 18:44:42.376574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.376615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.376737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.376785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.376927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.376961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.377078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.377115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.377327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.377389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 
00:37:44.251 [2024-11-18 18:44:42.377504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.377553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.377728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.377762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.377898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.377931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.378109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.378146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.378389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.378440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 
00:37:44.251 [2024-11-18 18:44:42.378624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.378664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.378776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.378809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.378935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.378972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.379097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.379192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.379366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.379403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 
00:37:44.251 [2024-11-18 18:44:42.379521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.379558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.379730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.379763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.379862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.379918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.380060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.380096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.380225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.380275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 
00:37:44.251 [2024-11-18 18:44:42.380383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.380419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.380570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.380612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.380778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.380811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.380944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.380996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.381112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.381148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 
00:37:44.251 [2024-11-18 18:44:42.381317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.381353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.381489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.251 [2024-11-18 18:44:42.381525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.251 qpair failed and we were unable to recover it. 00:37:44.251 [2024-11-18 18:44:42.381692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.381726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.381823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.381857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.381994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.382041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 
00:37:44.252 [2024-11-18 18:44:42.382155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.382191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.382354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.382391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.382552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.382595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.382732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.382765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.382905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.382938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 
00:37:44.252 [2024-11-18 18:44:42.383071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.383109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.383295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.383331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.383503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.383539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.383713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.383746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.383870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.383937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 
00:37:44.252 [2024-11-18 18:44:42.384144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.384184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.384301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.384337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.384492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.384528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.384691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.384724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.384826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.384858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 
00:37:44.252 [2024-11-18 18:44:42.385000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.385032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.385187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.385224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.385346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.385382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.385536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.385573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.385798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.385845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 
00:37:44.252 [2024-11-18 18:44:42.385994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.386030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.386185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.386238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.386362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.386415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.386575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.386617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.386761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.386796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 
00:37:44.252 [2024-11-18 18:44:42.386975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.387013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.387152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.252 [2024-11-18 18:44:42.387193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.252 qpair failed and we were unable to recover it. 00:37:44.252 [2024-11-18 18:44:42.387370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.387407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.387546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.387582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.387730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.387765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 
00:37:44.253 [2024-11-18 18:44:42.387893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.387933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.388062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.388113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.388267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.388300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.388557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.388600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.388755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.388803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 
00:37:44.253 [2024-11-18 18:44:42.388968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.389008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.389182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.389219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.389366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.389403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.389522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.389559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.389754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.389787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 
00:37:44.253 [2024-11-18 18:44:42.389970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.390008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.390130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.390166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.390370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.390407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.390577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.390630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.390754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.390787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 
00:37:44.253 [2024-11-18 18:44:42.390930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.390968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.391122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.391159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.391290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.391326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.391487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.391520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.391627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.391662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 
00:37:44.253 [2024-11-18 18:44:42.391792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.391825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.391927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.391965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.392097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.392146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.392291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.392327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.392536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.392573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 
00:37:44.253 [2024-11-18 18:44:42.392736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.392784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.392939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.392990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.393200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.393265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.393464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.393524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.393698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.393732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 
00:37:44.253 [2024-11-18 18:44:42.393842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.393874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.394015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.253 [2024-11-18 18:44:42.394066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.253 qpair failed and we were unable to recover it. 00:37:44.253 [2024-11-18 18:44:42.394191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.394229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.394388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.394420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.394537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.394572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 
00:37:44.254 [2024-11-18 18:44:42.394766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.394814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.394963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.394999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.395185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.395223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.395370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.395408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.395560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.395600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 
00:37:44.254 [2024-11-18 18:44:42.395748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.395782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.395883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.395936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.396100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.396134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.396270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.396325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.396447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.396486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 
00:37:44.254 [2024-11-18 18:44:42.396669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.396703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.396837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.396871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.397057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.397095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.397260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.397293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.397409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.397443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 
00:37:44.254 [2024-11-18 18:44:42.397616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.397650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.397751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.397785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.397923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.397973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.398159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.398193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.398306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.398341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 
00:37:44.254 [2024-11-18 18:44:42.398453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.398493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.398675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.398724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.398838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.398873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.399026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.399059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.399173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.399206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 
00:37:44.254 [2024-11-18 18:44:42.399384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.399417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.399518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.399551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.399742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.399776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.399912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.399945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.400069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.400120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 
00:37:44.254 [2024-11-18 18:44:42.400268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.400304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.400439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.400473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.254 [2024-11-18 18:44:42.400635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.254 [2024-11-18 18:44:42.400668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.254 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.400791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.400823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.400962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.400995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 
00:37:44.255 [2024-11-18 18:44:42.401173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.401210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.401328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.401364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.401484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.401517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.401654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.401688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.401823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.401856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 
00:37:44.255 [2024-11-18 18:44:42.402007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.402040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.402183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.402215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.402382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.402419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.402539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.402572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.402716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.402750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 
00:37:44.255 [2024-11-18 18:44:42.402888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.402921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.403023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.403055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.403234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.403270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.403412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.403448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.403613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.403647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 
00:37:44.255 [2024-11-18 18:44:42.403811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.403860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.404052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.404093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.404256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.404291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.404458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.404495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.404650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.404716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 
00:37:44.255 [2024-11-18 18:44:42.404860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.404898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.405027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.405061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.405227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.405260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.405386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.405420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.405532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.405583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 
00:37:44.255 [2024-11-18 18:44:42.405751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.405791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.255 [2024-11-18 18:44:42.405942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.255 [2024-11-18 18:44:42.405976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.255 qpair failed and we were unable to recover it. 00:37:44.256 [2024-11-18 18:44:42.406110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.256 [2024-11-18 18:44:42.406144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.256 qpair failed and we were unable to recover it. 00:37:44.256 [2024-11-18 18:44:42.406255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.256 [2024-11-18 18:44:42.406289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.256 qpair failed and we were unable to recover it. 00:37:44.256 [2024-11-18 18:44:42.406420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.256 [2024-11-18 18:44:42.406453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.256 qpair failed and we were unable to recover it. 
00:37:44.256 [2024-11-18 18:44:42.406601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.256 [2024-11-18 18:44:42.406642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.256 qpair failed and we were unable to recover it. 00:37:44.256 [2024-11-18 18:44:42.406796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.256 [2024-11-18 18:44:42.406843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.256 qpair failed and we were unable to recover it. 00:37:44.256 [2024-11-18 18:44:42.407011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.256 [2024-11-18 18:44:42.407059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.256 qpair failed and we were unable to recover it. 00:37:44.256 [2024-11-18 18:44:42.407225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.256 [2024-11-18 18:44:42.407264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.256 qpair failed and we were unable to recover it. 00:37:44.256 [2024-11-18 18:44:42.407414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.256 [2024-11-18 18:44:42.407452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.256 qpair failed and we were unable to recover it. 
00:37:44.256 [2024-11-18 18:44:42.407626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.256 [2024-11-18 18:44:42.407676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.256 qpair failed and we were unable to recover it. 00:37:44.256 [2024-11-18 18:44:42.407780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.256 [2024-11-18 18:44:42.407814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.256 qpair failed and we were unable to recover it. 00:37:44.256 [2024-11-18 18:44:42.407946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.256 [2024-11-18 18:44:42.407980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.256 qpair failed and we were unable to recover it. 00:37:44.256 [2024-11-18 18:44:42.408080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.256 [2024-11-18 18:44:42.408113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.256 qpair failed and we were unable to recover it. 00:37:44.256 [2024-11-18 18:44:42.408330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.256 [2024-11-18 18:44:42.408399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.256 qpair failed and we were unable to recover it. 
00:37:44.256 [2024-11-18 18:44:42.408580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.256 [2024-11-18 18:44:42.408625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.256 qpair failed and we were unable to recover it. 00:37:44.256 [2024-11-18 18:44:42.408803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.256 [2024-11-18 18:44:42.408836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.256 qpair failed and we were unable to recover it. 00:37:44.256 [2024-11-18 18:44:42.409006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.256 [2024-11-18 18:44:42.409045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.256 qpair failed and we were unable to recover it. 00:37:44.256 [2024-11-18 18:44:42.409300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.256 [2024-11-18 18:44:42.409389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.256 qpair failed and we were unable to recover it. 00:37:44.256 [2024-11-18 18:44:42.409538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.256 [2024-11-18 18:44:42.409573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.256 qpair failed and we were unable to recover it. 
00:37:44.256 [2024-11-18 18:44:42.409719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.256 [2024-11-18 18:44:42.409753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.256 qpair failed and we were unable to recover it.
00:37:44.256 [2024-11-18 18:44:42.409881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.256 [2024-11-18 18:44:42.409934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.256 qpair failed and we were unable to recover it.
00:37:44.256 [2024-11-18 18:44:42.410112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.256 [2024-11-18 18:44:42.410164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.256 qpair failed and we were unable to recover it.
00:37:44.256 [2024-11-18 18:44:42.410315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.256 [2024-11-18 18:44:42.410368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.256 qpair failed and we were unable to recover it.
00:37:44.256 [2024-11-18 18:44:42.410529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.256 [2024-11-18 18:44:42.410578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.256 qpair failed and we were unable to recover it.
00:37:44.256 [2024-11-18 18:44:42.410729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.256 [2024-11-18 18:44:42.410766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.256 qpair failed and we were unable to recover it.
00:37:44.256 [2024-11-18 18:44:42.410896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.256 [2024-11-18 18:44:42.410965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.256 qpair failed and we were unable to recover it.
00:37:44.256 [2024-11-18 18:44:42.411173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.256 [2024-11-18 18:44:42.411274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.256 qpair failed and we were unable to recover it.
00:37:44.256 [2024-11-18 18:44:42.411505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.256 [2024-11-18 18:44:42.411562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.256 qpair failed and we were unable to recover it.
00:37:44.256 [2024-11-18 18:44:42.411701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.256 [2024-11-18 18:44:42.411735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.256 qpair failed and we were unable to recover it.
00:37:44.256 [2024-11-18 18:44:42.411835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.256 [2024-11-18 18:44:42.411870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.256 qpair failed and we were unable to recover it.
00:37:44.256 [2024-11-18 18:44:42.411989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.256 [2024-11-18 18:44:42.412023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.256 qpair failed and we were unable to recover it.
00:37:44.256 [2024-11-18 18:44:42.412211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.256 [2024-11-18 18:44:42.412283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.256 qpair failed and we were unable to recover it.
00:37:44.256 [2024-11-18 18:44:42.412408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.256 [2024-11-18 18:44:42.412460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.256 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.412631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.412681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.412792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.412825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.413002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.413050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.413216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.413272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.413408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.413460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.413624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.413658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.413788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.413846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.414005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.414056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.414188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.414222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.414372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.414420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.414612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.414661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.414800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.414848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.415053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.415106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.415258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.415324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.415484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.415517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.415630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.415664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.415822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.415874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.416029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.416079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.416230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.416279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.416417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.416455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.416574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.416622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.416768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.416805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.416960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.416996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.417262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.417320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.417445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.417482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.417617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.417651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.417782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.417831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.417961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.418016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.418166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.418204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.418380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.418418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.418583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.418651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.418819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.418854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.419138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.419198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.419409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.419444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.419675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.419708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.419857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.257 [2024-11-18 18:44:42.419909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.257 qpair failed and we were unable to recover it.
00:37:44.257 [2024-11-18 18:44:42.420059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.420111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.420259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.420311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.420448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.420482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.420653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.420687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.420799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.420832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.421012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.421048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.421266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.421331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.421445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.421482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.421641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.421675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.421809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.421861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.422026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.422084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.422244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.422339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.422470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.422503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.422639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.422674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.422813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.422848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.423018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.423052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.423229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.423263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.423422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.423456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.423558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.423595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.423732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.423765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.423900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.423953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.424106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.424159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.424284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.424337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.424502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.424536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.424687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.424739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.424892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.424926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.425060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.425095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.425234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.425267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.425401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.425434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.425569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.425613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.425774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.425807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.426010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.426070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.426202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.426297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.426442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.426478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.426624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.426676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.426850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.426898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.427051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.427104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.258 [2024-11-18 18:44:42.427236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.258 [2024-11-18 18:44:42.427289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.258 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.427413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.427448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.427585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.427629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.427796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.427844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.427964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.428000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.428105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.428138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.428267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.428301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.428410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.428443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.428601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.428660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.428795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.428850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.429005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.429058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.429215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.429254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.429376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.429413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.429569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.429619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.429730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.429763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.429930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.429984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.430242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.430317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.430451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.430526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.430697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.430732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.430865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.430898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.431015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.431067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.431242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.431279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.431475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.431512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.431635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.431686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.431824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.431857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.432026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.432060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.432190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.432241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.432396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.432432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.432590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.432629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.432742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.432777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.432915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.432948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.433074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.433110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.433268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.433322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.433462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.433500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.433654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.433688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.433838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.433902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.434072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.434124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.434334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.434374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.259 qpair failed and we were unable to recover it.
00:37:44.259 [2024-11-18 18:44:42.434516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.259 [2024-11-18 18:44:42.434554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.434695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.434728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.434854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.434902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.435038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.435094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.435335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.435387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.435507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.435545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.435739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.435773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.435984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.436051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.436291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.436349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.436474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.436510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.436645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.436682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.436814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.436866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.437048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.437100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.437234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.437294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.437406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.437441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.437557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.437624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.437789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.437824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.437962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.437994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.438090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.438123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.438258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.438291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.438424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.438457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.438566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.438599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.438716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.438750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.438928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.438980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.439158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.439210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.439358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.439410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.439541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.439574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.439728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.439780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.439988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.440055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.440329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.440388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.440510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.440548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.440704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.440742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.440917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.260 [2024-11-18 18:44:42.440970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.260 qpair failed and we were unable to recover it.
00:37:44.260 [2024-11-18 18:44:42.441124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.441202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.441404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.441459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.441633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.441667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.441777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.441811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.441962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.442015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.442266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.442321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.442479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.442513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.442642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.442677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.442847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.442901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.443057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.443117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.443243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.443296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.443474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.443521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.443663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.443719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.443845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.443898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.444123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.444186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.444408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.444446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.444559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.444597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.444778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.444832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.444991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.445041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.445243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.445302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.445417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.445452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.445599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.445640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.445767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.445825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.445962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.445995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.446131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.446164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.446296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.446330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.446431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.446465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.446586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.446660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.446860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.446911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.447061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.447100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.447225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.447258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.447363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.447396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.447527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.447560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.447692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.447726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.447854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.447921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.448117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.448171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.448386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.448426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.261 [2024-11-18 18:44:42.448566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.261 [2024-11-18 18:44:42.448604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.261 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.448756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.448790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.448926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.448970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.449170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.449232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.449382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.449440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.449570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.449603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.449756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.449789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.449925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.449958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.450232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.450290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.450441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.450478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.450626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.450678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.450790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.450823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.450950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.451019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.451155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.451194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.451310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.451348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.451524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.451561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.451724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.451757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.451903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.451940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.452209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.452264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.452414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.452452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.452576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.452617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.452797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.452844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.452967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.453003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.453223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.453260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.453387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.453424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.453595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.453657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.453798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.453831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.453981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.454017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.454188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.454224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.454433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.454471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.454620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.454673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.454816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.454849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.455038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.455071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.455271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.455336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.455451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.455487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.455622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.455656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.455795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.455829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.455961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.455994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.262 qpair failed and we were unable to recover it.
00:37:44.262 [2024-11-18 18:44:42.456114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.262 [2024-11-18 18:44:42.456151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.456405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.456442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.456585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.456647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.456784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.456817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.456924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.456957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.457088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.457123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.457227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.457259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.457393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.457428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.457586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.457636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.457821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.457854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.457958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.458010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.458227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.458262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.458379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.458411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.458528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.458563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.458712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.458750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.458855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.458887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.458992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.459025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.459206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.459263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.459416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.459449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.459578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.459617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.459788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.459821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.459917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.459949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.460113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.460164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.460275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.460313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.460435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.460468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.460579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.460624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.460790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.460824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.460967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.461016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.461138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.461172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.461350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.461388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.461499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.461535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.461655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.461700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.461823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.461856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.462000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.462038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.462227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.462296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.462433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.462472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.462620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.462671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.462801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.263 [2024-11-18 18:44:42.462835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.263 qpair failed and we were unable to recover it.
00:37:44.263 [2024-11-18 18:44:42.463016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.463053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.463189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.463225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.463424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.463461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.463628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.463678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.463790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.463823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.463980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.464013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.464130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.464163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.464288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.464321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.464443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.464480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.464580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.464631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.464754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.464786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.464884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.464917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.465052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.465100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.465224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.465277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.465489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.465528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.465675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.465711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.465870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.465909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.466043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.466075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.466211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.466245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.466436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.466474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.466621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.466655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.466792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.466825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.466972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.467009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.467180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.467258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.467405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.467442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.467597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.467656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.467804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.467851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.468017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.468073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.468225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.468278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.468431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.468483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.468614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.468649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.468785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.468820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.468952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.468986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.469131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.469166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.469276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.469311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.469445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.469479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.469621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.469655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.264 [2024-11-18 18:44:42.469762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.264 [2024-11-18 18:44:42.469797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.264 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.469960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.469994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.470121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.470154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.470252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.470285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.470391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.470428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.470536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.470570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.470776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.470829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.470971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.471023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.471171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.471223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.471362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.471396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.471555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.471596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.471764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.471815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.472083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.472135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.472323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.472357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.472486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.472520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.472668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.472703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.472820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.472853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.472995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.473028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.473134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.473167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.473328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.473378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.473514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.473547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.473665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.473701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.473827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.473874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.474003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.474038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.474143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.474176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.474299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.474338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.474461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.474497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.474642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.474691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.474844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.474881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.475024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.475060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.475201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.475237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.475416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.475469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.475564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.475599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.265 [2024-11-18 18:44:42.475778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.265 [2024-11-18 18:44:42.475831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.265 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.475986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.476038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.476132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.476166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.476303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.476354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.476452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.476486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.476620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.476654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.476810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.476857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.477026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.477061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.477202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.477236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.477338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.477372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.477537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.477569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.477687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.477721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.477828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.477861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.478037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.478070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.478199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.478233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.478388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.478440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.478602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.478658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.478802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.478838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.478967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.479000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.479177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.479238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.479410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.479447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.479585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.479640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.479768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.479801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.479945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.480007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.480116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.480154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.480356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.480392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.480519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.480561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.480763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.480811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.480938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.480982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.481099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.481150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.481348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.481382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.481552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.481585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.481736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.481769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.481926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.481964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.482181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.482218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.482365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.482402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.482545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.482584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.482734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.482782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.266 qpair failed and we were unable to recover it.
00:37:44.266 [2024-11-18 18:44:42.482907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.266 [2024-11-18 18:44:42.482943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.483196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.483251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.483496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.483548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.483708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.483743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.483895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.483945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.484100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.484151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.484248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.484282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.484387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.484422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.484579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.484644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.484770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.484806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.484972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.485008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.485153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.485188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.485305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.485341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.485477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.485512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.485678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.485712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.485873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.485934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.486080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.486132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.486335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.486407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.486546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.486581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.486699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.486734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.486863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.486910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.487086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.487121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.487261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.487295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.487432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.487465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.487586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.487639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.487803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.487836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.487989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.488035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.488183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.488220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.488335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.488377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.488526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.488559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.488672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.488705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.488812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.488846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.488987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.489039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.489190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.489226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.489402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.489439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.489599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.489644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.489740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.489773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.489875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.267 [2024-11-18 18:44:42.489909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.267 qpair failed and we were unable to recover it.
00:37:44.267 [2024-11-18 18:44:42.490048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.490081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.490211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.490270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.490445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.490477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.490673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.490707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.490868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.490919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.491075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.491112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.491238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.491290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.491468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.491505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.491661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.491694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.491854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.491887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.492030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.492083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.492217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.492255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.492369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.492405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.492586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.492638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.492807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.492855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.493084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.493151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.493322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.493377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.493495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.493529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.493672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.493707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.493833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.493873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.494048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.494084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.494222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.494268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.494385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.494420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.494563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.494598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.494722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.494755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.494857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.494901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.495054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.495090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.495283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.495339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.495524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.495558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.495710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.495743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.495842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.495896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.496047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.496091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.496298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.496334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.496512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.496548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.496721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.496757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.496916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.496949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.497055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.268 [2024-11-18 18:44:42.497108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.268 qpair failed and we were unable to recover it.
00:37:44.268 [2024-11-18 18:44:42.497221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.497259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.497465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.497502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.497651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.497685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.497786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.497820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.497934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.497967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.498075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.498109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.498236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.498272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.498478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.498515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.498730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.498765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.498908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.498941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.499094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.499144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.499301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.499342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.499496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.499533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.499727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.499780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.499965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.500018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.500139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.500178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.500298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.500335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.500485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.500532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.500651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.500703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.500850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.500885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.501113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.269 [2024-11-18 18:44:42.501150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.269 qpair failed and we were unable to recover it.
00:37:44.269 [2024-11-18 18:44:42.501299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.269 [2024-11-18 18:44:42.501336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.269 qpair failed and we were unable to recover it. 00:37:44.269 [2024-11-18 18:44:42.501486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.269 [2024-11-18 18:44:42.501522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.269 qpair failed and we were unable to recover it. 00:37:44.269 [2024-11-18 18:44:42.501689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.269 [2024-11-18 18:44:42.501723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.269 qpair failed and we were unable to recover it. 00:37:44.269 [2024-11-18 18:44:42.501834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.269 [2024-11-18 18:44:42.501867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.269 qpair failed and we were unable to recover it. 00:37:44.269 [2024-11-18 18:44:42.501984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.269 [2024-11-18 18:44:42.502017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.269 qpair failed and we were unable to recover it. 
00:37:44.269 [2024-11-18 18:44:42.502186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.269 [2024-11-18 18:44:42.502231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.269 qpair failed and we were unable to recover it. 00:37:44.269 [2024-11-18 18:44:42.502383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.269 [2024-11-18 18:44:42.502420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.269 qpair failed and we were unable to recover it. 00:37:44.269 [2024-11-18 18:44:42.502603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.269 [2024-11-18 18:44:42.502678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.269 qpair failed and we were unable to recover it. 00:37:44.269 [2024-11-18 18:44:42.502800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.269 [2024-11-18 18:44:42.502848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.269 qpair failed and we were unable to recover it. 00:37:44.269 [2024-11-18 18:44:42.503015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.269 [2024-11-18 18:44:42.503057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.269 qpair failed and we were unable to recover it. 
00:37:44.269 [2024-11-18 18:44:42.503239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.269 [2024-11-18 18:44:42.503296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.269 qpair failed and we were unable to recover it. 00:37:44.269 [2024-11-18 18:44:42.503467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.269 [2024-11-18 18:44:42.503504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.269 qpair failed and we were unable to recover it. 00:37:44.269 [2024-11-18 18:44:42.503637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.269 [2024-11-18 18:44:42.503694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.269 qpair failed and we were unable to recover it. 00:37:44.269 [2024-11-18 18:44:42.503822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.269 [2024-11-18 18:44:42.503856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.269 qpair failed and we were unable to recover it. 00:37:44.269 [2024-11-18 18:44:42.503959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.269 [2024-11-18 18:44:42.503991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.269 qpair failed and we were unable to recover it. 
00:37:44.269 [2024-11-18 18:44:42.504135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.269 [2024-11-18 18:44:42.504168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.269 qpair failed and we were unable to recover it. 00:37:44.269 [2024-11-18 18:44:42.504413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.504474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.504603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.504643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.504751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.504783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.504934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.504972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 
00:37:44.270 [2024-11-18 18:44:42.505132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.505184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.505319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.505354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.505510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.505545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.505739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.505773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.505924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.505961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 
00:37:44.270 [2024-11-18 18:44:42.506125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.506159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.506306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.506357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.506517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.506550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.506717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.506751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.506859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.506891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 
00:37:44.270 [2024-11-18 18:44:42.507006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.507038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.507166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.507203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.507402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.507438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.507580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.507628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.507782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.507815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 
00:37:44.270 [2024-11-18 18:44:42.507963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.507996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.508155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.508192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.508364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.508400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.508543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.508578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.508737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.508771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 
00:37:44.270 [2024-11-18 18:44:42.508927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.508963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.509119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.509152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.509346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.509384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.509538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.509574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.509709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.509743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 
00:37:44.270 [2024-11-18 18:44:42.509876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.509928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.510118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.510152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.510318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.510355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.510501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.510537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.510711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.510744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 
00:37:44.270 [2024-11-18 18:44:42.510880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.510919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.511039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.511072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.270 qpair failed and we were unable to recover it. 00:37:44.270 [2024-11-18 18:44:42.511255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.270 [2024-11-18 18:44:42.511299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.511530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.511567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.511717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.511750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 
00:37:44.271 [2024-11-18 18:44:42.511854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.511888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.512026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.512058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.512241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.512277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.512428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.512473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.512633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.512667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 
00:37:44.271 [2024-11-18 18:44:42.512826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.512859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.512993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.513029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.513196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.513245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.513396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.513433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.513548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.513584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 
00:37:44.271 [2024-11-18 18:44:42.513744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.513777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.513925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.513965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.514086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.514121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.514259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.514310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.514462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.514499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 
00:37:44.271 [2024-11-18 18:44:42.514617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.514667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.514775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.514808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.514925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.514975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.515109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.515142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.515316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.515350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 
00:37:44.271 [2024-11-18 18:44:42.515525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.515562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.515705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.515738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.515846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.515878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.515994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.516028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.516167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.516200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 
00:37:44.271 [2024-11-18 18:44:42.516360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.516395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.516567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.516604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.516768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.516801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.516937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.516970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.517095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.517128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 
00:37:44.271 [2024-11-18 18:44:42.517311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.517346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.517573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.517619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.517771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.517803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.517952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.517988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.518143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.518174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 
00:37:44.271 [2024-11-18 18:44:42.518305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.518337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.518504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.518542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.518759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.518798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.271 qpair failed and we were unable to recover it. 00:37:44.271 [2024-11-18 18:44:42.518977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.271 [2024-11-18 18:44:42.519035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.519144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.519177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 
00:37:44.272 [2024-11-18 18:44:42.519277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.519309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.519426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.519458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.519617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.519654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.519818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.519851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.520037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.520075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 
00:37:44.272 [2024-11-18 18:44:42.520196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.520238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.520403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.520436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.520578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.520619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.520748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.520784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.520928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.520971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 
00:37:44.272 [2024-11-18 18:44:42.521107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.521140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.521298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.521335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.521466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.521500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.521689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.521727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.521839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.521875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 
00:37:44.272 [2024-11-18 18:44:42.522011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.522044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.522188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.522229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.522369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.522402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.522579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.522617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.522763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.522799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 
00:37:44.272 [2024-11-18 18:44:42.522933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.522970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.523155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.523189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.523296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.523346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.523530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.523566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.523716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.523749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 
00:37:44.272 [2024-11-18 18:44:42.523883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.523917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.524075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.524112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.524234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.524268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.524402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.524437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.524629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.524667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 
00:37:44.272 [2024-11-18 18:44:42.524846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.524879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.524991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.525023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.525166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.525199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.525308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.525341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.525478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.525511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 
00:37:44.272 [2024-11-18 18:44:42.525649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.525686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.525838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.525871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.525993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.526029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.272 [2024-11-18 18:44:42.526185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.272 [2024-11-18 18:44:42.526218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.272 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.526381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.526413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 
00:37:44.273 [2024-11-18 18:44:42.526566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.526601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.526751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.526812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.526995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.527028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.527136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.527169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.527286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.527321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 
00:37:44.273 [2024-11-18 18:44:42.527461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.527493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.527601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.527642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.527756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.527793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.527946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.527980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.528136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.528168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 
00:37:44.273 [2024-11-18 18:44:42.528337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.528374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.528535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.528568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.528797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.528852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.529055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.529108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.529241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.529277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 
00:37:44.273 [2024-11-18 18:44:42.529413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.529463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.529619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.529657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.529794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.529827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.529969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.530008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.530139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.530176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 
00:37:44.273 [2024-11-18 18:44:42.530276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.530309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.530466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.530516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.530666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.530703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.530887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.530921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.531102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.531160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 
00:37:44.273 [2024-11-18 18:44:42.531269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.531306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.531476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.531508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.531675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.531724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.531846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.531882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.532036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.532079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 
00:37:44.273 [2024-11-18 18:44:42.532225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.532277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.532396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.532432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.532586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.532630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.532771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.532805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.532904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.532937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 
00:37:44.273 [2024-11-18 18:44:42.533070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.533105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.533265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.533303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.533454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.533497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.533650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.533684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.533816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.533848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 
00:37:44.273 [2024-11-18 18:44:42.534004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.534047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.534179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.273 [2024-11-18 18:44:42.534213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.273 qpair failed and we were unable to recover it. 00:37:44.273 [2024-11-18 18:44:42.534349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.274 [2024-11-18 18:44:42.534383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.274 qpair failed and we were unable to recover it. 00:37:44.274 [2024-11-18 18:44:42.534546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.274 [2024-11-18 18:44:42.534584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.274 qpair failed and we were unable to recover it. 00:37:44.274 [2024-11-18 18:44:42.534730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.274 [2024-11-18 18:44:42.534764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.274 qpair failed and we were unable to recover it. 
00:37:44.274 [2024-11-18 18:44:42.534906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.274 [2024-11-18 18:44:42.534940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.274 qpair failed and we were unable to recover it. 00:37:44.274 [2024-11-18 18:44:42.535128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.274 [2024-11-18 18:44:42.535189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.274 qpair failed and we were unable to recover it. 00:37:44.274 [2024-11-18 18:44:42.535390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.274 [2024-11-18 18:44:42.535424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.274 qpair failed and we were unable to recover it. 00:37:44.274 [2024-11-18 18:44:42.535604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.274 [2024-11-18 18:44:42.535664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.274 qpair failed and we were unable to recover it. 00:37:44.274 [2024-11-18 18:44:42.535857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.274 [2024-11-18 18:44:42.535891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.274 qpair failed and we were unable to recover it. 
00:37:44.274 [2024-11-18 18:44:42.536035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.274 [2024-11-18 18:44:42.536069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.274 qpair failed and we were unable to recover it. 00:37:44.274 [2024-11-18 18:44:42.536210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.274 [2024-11-18 18:44:42.536244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.274 qpair failed and we were unable to recover it. 00:37:44.274 [2024-11-18 18:44:42.536354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.274 [2024-11-18 18:44:42.536388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.274 qpair failed and we were unable to recover it. 00:37:44.274 [2024-11-18 18:44:42.536561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.274 [2024-11-18 18:44:42.536605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.274 qpair failed and we were unable to recover it. 00:37:44.274 [2024-11-18 18:44:42.536729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.274 [2024-11-18 18:44:42.536762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.274 qpair failed and we were unable to recover it. 
00:37:44.274 [2024-11-18 18:44:42.536888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.274 [2024-11-18 18:44:42.536942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.274 qpair failed and we were unable to recover it.
00:37:44.274 [2024-11-18 18:44:42.537114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.274 [2024-11-18 18:44:42.537152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.274 qpair failed and we were unable to recover it.
00:37:44.274 [2024-11-18 18:44:42.537263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.274 [2024-11-18 18:44:42.537297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.274 qpair failed and we were unable to recover it.
00:37:44.274 [2024-11-18 18:44:42.537422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.274 [2024-11-18 18:44:42.537456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.274 qpair failed and we were unable to recover it.
00:37:44.274 [2024-11-18 18:44:42.537571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.274 [2024-11-18 18:44:42.537623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.274 qpair failed and we were unable to recover it.
00:37:44.274 [2024-11-18 18:44:42.537765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.274 [2024-11-18 18:44:42.537797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.274 qpair failed and we were unable to recover it.
00:37:44.274 [2024-11-18 18:44:42.537965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.274 [2024-11-18 18:44:42.537999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.274 qpair failed and we were unable to recover it.
00:37:44.274 [2024-11-18 18:44:42.538105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.274 [2024-11-18 18:44:42.538139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.274 qpair failed and we were unable to recover it.
00:37:44.274 [2024-11-18 18:44:42.538292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.274 [2024-11-18 18:44:42.538346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.274 qpair failed and we were unable to recover it.
00:37:44.274 [2024-11-18 18:44:42.538521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.274 [2024-11-18 18:44:42.538556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.274 qpair failed and we were unable to recover it.
00:37:44.274 [2024-11-18 18:44:42.538706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.274 [2024-11-18 18:44:42.538741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.274 qpair failed and we were unable to recover it.
00:37:44.274 [2024-11-18 18:44:42.538956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.274 [2024-11-18 18:44:42.539019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.274 qpair failed and we were unable to recover it.
00:37:44.274 [2024-11-18 18:44:42.539170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.274 [2024-11-18 18:44:42.539203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.274 qpair failed and we were unable to recover it.
00:37:44.274 [2024-11-18 18:44:42.539320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.274 [2024-11-18 18:44:42.539352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.274 qpair failed and we were unable to recover it.
00:37:44.274 [2024-11-18 18:44:42.539464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.564 [2024-11-18 18:44:42.539497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.564 qpair failed and we were unable to recover it.
00:37:44.564 [2024-11-18 18:44:42.539623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.564 [2024-11-18 18:44:42.539657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.564 qpair failed and we were unable to recover it.
00:37:44.564 [2024-11-18 18:44:42.539797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.564 [2024-11-18 18:44:42.539831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.564 qpair failed and we were unable to recover it.
00:37:44.564 [2024-11-18 18:44:42.539972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.564 [2024-11-18 18:44:42.540010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.564 qpair failed and we were unable to recover it.
00:37:44.564 [2024-11-18 18:44:42.540157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.564 [2024-11-18 18:44:42.540190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.564 qpair failed and we were unable to recover it.
00:37:44.564 [2024-11-18 18:44:42.540314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.564 [2024-11-18 18:44:42.540347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.564 qpair failed and we were unable to recover it.
00:37:44.564 [2024-11-18 18:44:42.540450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.564 [2024-11-18 18:44:42.540487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.564 qpair failed and we were unable to recover it.
00:37:44.564 [2024-11-18 18:44:42.540643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.540701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.540804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.540864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.541044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.541083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.541219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.541254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.541387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.541434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.541649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.541685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.541823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.541859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.542002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.542056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.542188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.542233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.542391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.542426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.542589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.542639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.542776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.542811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.542923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.542960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.543076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.543131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.543318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.543392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.543574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.543637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.543816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.543850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.544021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.544059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.544215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.544253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.544402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.544439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.544562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.544612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.544751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.544784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.544912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.544948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.545099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.545136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.545281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.545317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.545426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.545475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.545632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.545671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.545843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.545880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.546088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.546128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.546410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.546480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.546651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.546687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.546842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.565 [2024-11-18 18:44:42.546895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.565 qpair failed and we were unable to recover it.
00:37:44.565 [2024-11-18 18:44:42.547046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.547083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.547224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.547261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.547390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.547424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.547566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.547603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.547775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.547809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.547976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.548011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3140258 Killed "${NVMF_APP[@]}" "$@"
00:37:44.566 [2024-11-18 18:44:42.548148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.548183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.548328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.548360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 18:44:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:37:44.566 [2024-11-18 18:44:42.548467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.548501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 18:44:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:37:44.566 [2024-11-18 18:44:42.548642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 18:44:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:37:44.566 [2024-11-18 18:44:42.548691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 18:44:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:44.566 [2024-11-18 18:44:42.548840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.548876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 18:44:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:44.566 [2024-11-18 18:44:42.549050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.549136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.549337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.549407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.549556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.549603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.549747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.549781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.549941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.549979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.550162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.550200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.550318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.550372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.550517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.550555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.550718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.550753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.550892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.550938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.551095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.551143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.551308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.551349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.551548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.551628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.551786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.551820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.551935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.551973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.552113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.552147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.552285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.552324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.552466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.552502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.552641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.552675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.552787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.566 [2024-11-18 18:44:42.552820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.566 qpair failed and we were unable to recover it.
00:37:44.566 [2024-11-18 18:44:42.552947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.567 [2024-11-18 18:44:42.552983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.567 qpair failed and we were unable to recover it.
00:37:44.567 [2024-11-18 18:44:42.553109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.567 [2024-11-18 18:44:42.553158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.567 qpair failed and we were unable to recover it.
00:37:44.567 18:44:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3140904
00:37:44.567 [2024-11-18 18:44:42.553306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.567 [2024-11-18 18:44:42.553346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.567 18:44:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:37:44.567 qpair failed and we were unable to recover it.
00:37:44.567 18:44:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3140904
00:37:44.567 18:44:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3140904 ']'
00:37:44.567 18:44:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:44.567 18:44:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:44.567 18:44:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:44.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:44.567 18:44:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:44.567 [2024-11-18 18:44:42.554276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.567 18:44:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:44.567 [2024-11-18 18:44:42.554322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.567 qpair failed and we were unable to recover it.
00:37:44.567 [2024-11-18 18:44:42.554513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.567 [2024-11-18 18:44:42.554552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.567 qpair failed and we were unable to recover it.
00:37:44.567 [2024-11-18 18:44:42.554742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.554777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.554892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.554946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.555118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.555158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.555336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.555382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.558662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.558706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 
00:37:44.567 [2024-11-18 18:44:42.558837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.558885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.559096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.559139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.559298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.559344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.559576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.559625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.559777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.559812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 
00:37:44.567 [2024-11-18 18:44:42.559957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.559991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.560142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.560176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.560421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.560455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.560656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.560691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.560836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.560871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 
00:37:44.567 [2024-11-18 18:44:42.561049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.561088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.561220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.561259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.561448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.561486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.561684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.561718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.561864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.561914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 
00:37:44.567 [2024-11-18 18:44:42.562050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.562088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.562253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.562291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.562451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.562489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.562634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.562669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.562783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.562817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 
00:37:44.567 [2024-11-18 18:44:42.562989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.563031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.563170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.563204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.567 [2024-11-18 18:44:42.563328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.567 [2024-11-18 18:44:42.563362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.567 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.565626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.565682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.565859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.565908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 
00:37:44.568 [2024-11-18 18:44:42.566038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.566083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.566212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.566246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.566388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.566422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.566634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.566674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.566792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.566831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 
00:37:44.568 [2024-11-18 18:44:42.566983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.567017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.567158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.567193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.567333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.567384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.567541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.567577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.567717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.567750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 
00:37:44.568 [2024-11-18 18:44:42.567887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.567929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.568121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.568156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.568288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.568323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.568492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.568525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.571633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.571693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 
00:37:44.568 [2024-11-18 18:44:42.571847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.571884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.572061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.572106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.572254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.572312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.572480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.572520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.572678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.572713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 
00:37:44.568 [2024-11-18 18:44:42.572878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.572916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.573103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.573142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.573299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.573337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.573520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.573567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 00:37:44.568 [2024-11-18 18:44:42.573712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.568 [2024-11-18 18:44:42.573747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.568 qpair failed and we were unable to recover it. 
00:37:44.569 [2024-11-18 18:44:42.573903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.573954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.574120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.574155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.574329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.574383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.574547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.574585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.574765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.574804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 
00:37:44.569 [2024-11-18 18:44:42.574974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.575013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.575141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.575180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.575361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.575400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.575541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.575576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.575703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.575738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 
00:37:44.569 [2024-11-18 18:44:42.575876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.575918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.576055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.576087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.578736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.578780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.578940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.578980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.579103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.579167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 
00:37:44.569 [2024-11-18 18:44:42.579318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.579355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.579542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.579580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.579734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.579768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.579889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.579922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.580058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.580090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 
00:37:44.569 [2024-11-18 18:44:42.580277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.580316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.580438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.580475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.580619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.580653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.580776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.580811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.580984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.581022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 
00:37:44.569 [2024-11-18 18:44:42.581179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.581222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.581371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.581408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.581563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.581614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.581742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.581776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 00:37:44.569 [2024-11-18 18:44:42.581967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.569 [2024-11-18 18:44:42.582011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.569 qpair failed and we were unable to recover it. 
00:37:44.569 [2024-11-18 18:44:42.584622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.569 [2024-11-18 18:44:42.584687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.569 qpair failed and we were unable to recover it.
00:37:44.573 [... the same three-line sequence — connect() failed, errno = 111; sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats with advancing timestamps from 18:44:42.584829 through 18:44:42.620037 ...]
00:37:44.573 [2024-11-18 18:44:42.620191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.620225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.620382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.620419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.620576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.620629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.620810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.620843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.620995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.621033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 
00:37:44.573 [2024-11-18 18:44:42.621211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.621245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.621383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.621421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.621577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.621634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.621756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.621808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.621945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.621978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 
00:37:44.573 [2024-11-18 18:44:42.622165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.622201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.624621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.624661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.624857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.624891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.625039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.625076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.625270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.625304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 
00:37:44.573 [2024-11-18 18:44:42.625471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.625504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.625635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.625672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.625833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.625870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.626068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.626102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.626237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.626290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 
00:37:44.573 [2024-11-18 18:44:42.626444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.626480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.626660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.626694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.626810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.626843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.626975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.627008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.627115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.627149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 
00:37:44.573 [2024-11-18 18:44:42.627255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.627288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.627486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.627519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.627651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.627685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.627873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.627914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.628092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.628129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 
00:37:44.573 [2024-11-18 18:44:42.628280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.573 [2024-11-18 18:44:42.628313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.573 qpair failed and we were unable to recover it. 00:37:44.573 [2024-11-18 18:44:42.630640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.630684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.630873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.630925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.631078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.631114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.631230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.631264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 
00:37:44.574 [2024-11-18 18:44:42.631421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.631459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.631620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.631655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.631794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.631828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.632002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.632038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.632221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.632253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 
00:37:44.574 [2024-11-18 18:44:42.632360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.632394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.632531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.632568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.632749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.632782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.632921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.632972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.633129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.633166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 
00:37:44.574 [2024-11-18 18:44:42.633329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.633362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.633508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.633545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.633691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.633742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.636622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.636666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.636808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.636846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 
00:37:44.574 [2024-11-18 18:44:42.637031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.637074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.637250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.637282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.637426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.637459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.637617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.637669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.637835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.637869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 
00:37:44.574 [2024-11-18 18:44:42.638032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.638070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.638220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.638257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.638421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.638454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.638598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.638667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.638850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.638886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 
00:37:44.574 [2024-11-18 18:44:42.639121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.639155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.639322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.639391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.639544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.639582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.639751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.574 [2024-11-18 18:44:42.639784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.574 qpair failed and we were unable to recover it. 00:37:44.574 [2024-11-18 18:44:42.639969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.640005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 
00:37:44.575 [2024-11-18 18:44:42.640165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.640202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 00:37:44.575 [2024-11-18 18:44:42.640337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.640385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 00:37:44.575 [2024-11-18 18:44:42.640511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.640544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 00:37:44.575 [2024-11-18 18:44:42.640715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.640749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 00:37:44.575 [2024-11-18 18:44:42.642627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.642674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 
00:37:44.575 [2024-11-18 18:44:42.642816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.642870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 00:37:44.575 [2024-11-18 18:44:42.643112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.643178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 00:37:44.575 [2024-11-18 18:44:42.643416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.643464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 00:37:44.575 [2024-11-18 18:44:42.643709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.643759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 00:37:44.575 [2024-11-18 18:44:42.643954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.644003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 
00:37:44.575 [2024-11-18 18:44:42.644172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.644221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 00:37:44.575 [2024-11-18 18:44:42.644431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.644479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 00:37:44.575 [2024-11-18 18:44:42.644663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.644712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 00:37:44.575 [2024-11-18 18:44:42.644859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.644915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 00:37:44.575 [2024-11-18 18:44:42.645135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.645181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 
00:37:44.575 [2024-11-18 18:44:42.645371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.645418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 00:37:44.575 [2024-11-18 18:44:42.645599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.645655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 00:37:44.575 [2024-11-18 18:44:42.645803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.645861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 00:37:44.575 [2024-11-18 18:44:42.646034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.646080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 00:37:44.575 [2024-11-18 18:44:42.646248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.575 [2024-11-18 18:44:42.646309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.575 qpair failed and we were unable to recover it. 
00:37:44.575 [2024-11-18 18:44:42.646510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.575 [2024-11-18 18:44:42.646559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.575 qpair failed and we were unable to recover it.
00:37:44.575 [2024-11-18 18:44:42.646812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.575 [2024-11-18 18:44:42.646866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.575 qpair failed and we were unable to recover it.
00:37:44.575 [2024-11-18 18:44:42.647095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.575 [2024-11-18 18:44:42.647143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.575 qpair failed and we were unable to recover it.
00:37:44.575 [2024-11-18 18:44:42.647313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.575 [2024-11-18 18:44:42.647362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.575 qpair failed and we were unable to recover it.
00:37:44.575 [2024-11-18 18:44:42.647556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.575 [2024-11-18 18:44:42.647622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.575 qpair failed and we were unable to recover it.
00:37:44.575 [2024-11-18 18:44:42.647765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.575 [2024-11-18 18:44:42.647827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.575 qpair failed and we were unable to recover it.
00:37:44.575 [2024-11-18 18:44:42.648124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.575 [2024-11-18 18:44:42.648170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.575 qpair failed and we were unable to recover it.
00:37:44.575 [2024-11-18 18:44:42.648317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.575 [2024-11-18 18:44:42.648364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.575 qpair failed and we were unable to recover it.
00:37:44.575 [2024-11-18 18:44:42.648559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.575 [2024-11-18 18:44:42.648627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.575 qpair failed and we were unable to recover it.
00:37:44.575 [2024-11-18 18:44:42.648825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.575 [2024-11-18 18:44:42.648871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.575 qpair failed and we were unable to recover it.
00:37:44.575 [2024-11-18 18:44:42.649014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.575 [2024-11-18 18:44:42.649062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.575 qpair failed and we were unable to recover it.
00:37:44.575 [2024-11-18 18:44:42.649232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.575 [2024-11-18 18:44:42.649281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.575 qpair failed and we were unable to recover it.
00:37:44.575 [2024-11-18 18:44:42.649425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.575 [2024-11-18 18:44:42.649472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.575 qpair failed and we were unable to recover it.
00:37:44.575 [2024-11-18 18:44:42.649675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.575 [2024-11-18 18:44:42.649725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.575 qpair failed and we were unable to recover it.
00:37:44.575 [2024-11-18 18:44:42.649904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.575 [2024-11-18 18:44:42.649952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.575 qpair failed and we were unable to recover it.
00:37:44.575 [2024-11-18 18:44:42.650129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.575 [2024-11-18 18:44:42.650177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.575 qpair failed and we were unable to recover it.
00:37:44.575 [2024-11-18 18:44:42.650317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.575 [2024-11-18 18:44:42.650361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.575 qpair failed and we were unable to recover it.
00:37:44.575 [2024-11-18 18:44:42.650511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.650562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.650730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.650777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.650977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.651023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.651190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.651235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.651390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.651434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.651639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.651686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.651853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.651900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.652106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.652153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.652312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.652358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.652517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.652562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.652741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.652788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.652959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.653004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.653135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.653178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.653359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.653404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.653556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.653619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.653755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.653803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.653964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.654010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.654173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.654219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.654411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.654459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.654628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.654676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.654836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.654894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.655030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.655077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.655257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.655303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.655433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.655491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.655680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.655732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.655896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.655942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.656126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.656173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.656303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.656348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.656501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.656546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.656706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.656751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.656908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.656952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.657115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.657162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.657351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.657396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.657633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.657678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.657864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.657911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.658080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.658126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.658310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.658355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.576 [2024-11-18 18:44:42.658509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.576 [2024-11-18 18:44:42.658555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.576 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.658759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.658804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.658962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.659008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.659146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.659192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.659358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.659402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.659598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.659664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.659835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.659878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.660052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.660103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.660243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.660291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.660449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.660495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.660667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.660713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.660892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.660938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.661103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.661149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.661311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.661357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.661548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.661603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.661768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.661813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.661945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.661991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.662126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.662172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.662341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.662387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.662547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.662602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.662748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.662794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.662941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.662986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.663115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.663159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.663322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.663367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.663526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.663571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.663804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.663855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.664014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.664050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.664160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.664199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.664312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.664347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.664478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.664512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.664647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.664681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.664792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.664826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.664961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.664995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.665110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.665143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.665248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.665281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.665417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.665450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.665561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.665594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.665729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.577 [2024-11-18 18:44:42.665763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.577 qpair failed and we were unable to recover it.
00:37:44.577 [2024-11-18 18:44:42.665901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.578 [2024-11-18 18:44:42.665934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.578 qpair failed and we were unable to recover it.
00:37:44.578 [2024-11-18 18:44:42.666040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.578 [2024-11-18 18:44:42.666073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.578 qpair failed and we were unable to recover it.
00:37:44.578 [2024-11-18 18:44:42.666187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.578 [2024-11-18 18:44:42.666220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.578 qpair failed and we were unable to recover it.
00:37:44.578 [2024-11-18 18:44:42.666328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.666361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.666492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.666524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.666652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.666686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.666785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.666820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.666919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.666952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 
00:37:44.578 [2024-11-18 18:44:42.667110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.667144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.667277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.667309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.667470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.667503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.667658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.667692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.667828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.667860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 
00:37:44.578 [2024-11-18 18:44:42.667958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.667991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.668116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.668149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.668289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.668324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.668464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.668498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.668620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.668655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 
00:37:44.578 [2024-11-18 18:44:42.668781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.668814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.668939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.668972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.669102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.669135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.669284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.669317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.669450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.669484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 
00:37:44.578 [2024-11-18 18:44:42.669619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.669653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.669787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.669820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.669979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.670011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.670173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.670206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.670303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.670336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 
00:37:44.578 [2024-11-18 18:44:42.670466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.670499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.670619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.670657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.670769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.670801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.670932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.670965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.671098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.671131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 
00:37:44.578 [2024-11-18 18:44:42.671259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.671292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.671401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.671434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.671594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.671637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.671774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.671808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 00:37:44.578 [2024-11-18 18:44:42.671953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.578 [2024-11-18 18:44:42.671987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.578 qpair failed and we were unable to recover it. 
00:37:44.578 [2024-11-18 18:44:42.672098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.672131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.672269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.672302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.672409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.672441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.672543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.672577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.672690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.672724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 
00:37:44.579 [2024-11-18 18:44:42.672868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.672902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.673047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.673080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.673208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.673241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.673377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.673410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.673515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.673547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 
00:37:44.579 [2024-11-18 18:44:42.673691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.673725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.673856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.673889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.673999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.674032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.674191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.674224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.674382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.674416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 
00:37:44.579 [2024-11-18 18:44:42.674512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.674544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.674682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.674717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.674856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.674889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.675024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.675057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.675186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.675219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 
00:37:44.579 [2024-11-18 18:44:42.675330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.675365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.675526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.675559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.675703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.675736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.675870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.675903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.676061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.676094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 
00:37:44.579 [2024-11-18 18:44:42.676228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.676260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.676387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.676419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.676565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.676598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.676739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.676772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.676876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.676908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 
00:37:44.579 [2024-11-18 18:44:42.677040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.677073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.677176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.677213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.579 [2024-11-18 18:44:42.677354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.579 [2024-11-18 18:44:42.677386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.579 qpair failed and we were unable to recover it. 00:37:44.580 [2024-11-18 18:44:42.677488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.677521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 00:37:44.580 [2024-11-18 18:44:42.677652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.677685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 
00:37:44.580 [2024-11-18 18:44:42.677831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.677863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 00:37:44.580 [2024-11-18 18:44:42.677963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.677996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 00:37:44.580 [2024-11-18 18:44:42.678111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.678144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 00:37:44.580 [2024-11-18 18:44:42.678277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.678310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 00:37:44.580 [2024-11-18 18:44:42.678447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.678480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 
00:37:44.580 [2024-11-18 18:44:42.678615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.678649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 00:37:44.580 [2024-11-18 18:44:42.678786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.678819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 00:37:44.580 [2024-11-18 18:44:42.678952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.678985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 00:37:44.580 [2024-11-18 18:44:42.679123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.679156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 00:37:44.580 [2024-11-18 18:44:42.679265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.679298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 
00:37:44.580 [2024-11-18 18:44:42.679458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.679491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 00:37:44.580 [2024-11-18 18:44:42.679584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.679624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 00:37:44.580 [2024-11-18 18:44:42.679732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.679765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 00:37:44.580 [2024-11-18 18:44:42.679870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.679904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 00:37:44.580 [2024-11-18 18:44:42.680031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.680064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 
00:37:44.580 [2024-11-18 18:44:42.680159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.680192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 00:37:44.580 [2024-11-18 18:44:42.680298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.680331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 00:37:44.580 [2024-11-18 18:44:42.680470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.680503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 00:37:44.580 [2024-11-18 18:44:42.680604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.680650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 00:37:44.580 [2024-11-18 18:44:42.680783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.680816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.580 qpair failed and we were unable to recover it. 
00:37:44.580 [2024-11-18 18:44:42.680948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:37:44.580 [2024-11-18 18:44:42.680998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 
00:37:44.580 qpair failed and we were unable to recover it. 
00:37:44.580 [2024-11-18 18:44:42.683503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.580 [2024-11-18 18:44:42.683536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.683643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.683681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.683791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.683825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.683972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.684020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.684143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.684181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 
00:37:44.581 [2024-11-18 18:44:42.684322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.684357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.684497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.684531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.684647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.684681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.684818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.684851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.685012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.685045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 
00:37:44.581 [2024-11-18 18:44:42.685205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.685238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.685339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.685372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.685508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.685543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.685658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.685704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.685860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.685894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 
00:37:44.581 [2024-11-18 18:44:42.686030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.686064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.686206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.686240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.686373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.686406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.686542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.686576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.686720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.686753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 
00:37:44.581 [2024-11-18 18:44:42.686884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.686917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.687052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.687086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.687221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.687254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.687352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.687384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.687522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.687557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 
00:37:44.581 [2024-11-18 18:44:42.687713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.687748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.687884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.687917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.688051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.688086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.688229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.688265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.688367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.688402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 
00:37:44.581 [2024-11-18 18:44:42.688542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.688576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.688738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.688773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.688888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.688922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.689033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.689066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.689168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.689201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 
00:37:44.581 [2024-11-18 18:44:42.689298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.689331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.689444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.689479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.689623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.581 [2024-11-18 18:44:42.689657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.581 qpair failed and we were unable to recover it. 00:37:44.581 [2024-11-18 18:44:42.689797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.689832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.689960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.689994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 
00:37:44.582 [2024-11-18 18:44:42.690128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.690162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.690277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.690316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.690453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.690487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.690653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.690687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.690793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.690826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 
00:37:44.582 [2024-11-18 18:44:42.690957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.690991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.691118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.691151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.691256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.691289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.691420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.691455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.691578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.691617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 
00:37:44.582 [2024-11-18 18:44:42.691783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.691816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.691959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.691993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.692123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.692156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.692318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.692351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.692486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.692520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 
00:37:44.582 [2024-11-18 18:44:42.692641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.692674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.692782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.692815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.692924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.692958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.693092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.693124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.693251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.693284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 
00:37:44.582 [2024-11-18 18:44:42.693419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.693453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.693618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.693652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.693762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.693796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.693898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.693932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.694043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.694078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 
00:37:44.582 [2024-11-18 18:44:42.694246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.694280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.694438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.694472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.694631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.694665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.694766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.694801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.694901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.694935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 
00:37:44.582 [2024-11-18 18:44:42.695033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.695067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.695170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.695204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.695335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.695370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.695518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.695553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.582 [2024-11-18 18:44:42.695666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.695700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 
00:37:44.582 [2024-11-18 18:44:42.695840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.582 [2024-11-18 18:44:42.695874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.582 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.696013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.696046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.696183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.696218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.696348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.696382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.696489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.696522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 
00:37:44.583 [2024-11-18 18:44:42.696654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.696688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.696828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.696866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.697000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.696977] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:37:44.583 [2024-11-18 18:44:42.697035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.697108] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:44.583 [2024-11-18 18:44:42.697170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.697203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 
00:37:44.583 [2024-11-18 18:44:42.697378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.697412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.697560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.697593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.697715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.697749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.697906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.697940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.698076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.698109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 
00:37:44.583 [2024-11-18 18:44:42.698214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.698248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.698412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.698447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.698553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.698587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.698720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.698754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.698860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.698894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 
00:37:44.583 [2024-11-18 18:44:42.699006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.699039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.699171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.699205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.699308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.699342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.699441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.699475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.699613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.699647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 
00:37:44.583 [2024-11-18 18:44:42.699811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.699847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.699960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.699994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.700102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.700138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.700256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.700290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.700425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.700458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 
00:37:44.583 [2024-11-18 18:44:42.700567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.700601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.700755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.700789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.700914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.700947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.701065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.583 [2024-11-18 18:44:42.701099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.583 qpair failed and we were unable to recover it. 00:37:44.583 [2024-11-18 18:44:42.701245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.701281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 
00:37:44.584 [2024-11-18 18:44:42.701397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.701444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.701604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.701651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.701811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.701845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.701976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.702010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.702143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.702178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 
00:37:44.584 [2024-11-18 18:44:42.702323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.702358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.702518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.702551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.702657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.702691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.702828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.702861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.703002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.703035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 
00:37:44.584 [2024-11-18 18:44:42.703195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.703228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.703357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.703396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.703501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.703536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.703655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.703689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.703797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.703831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 
00:37:44.584 [2024-11-18 18:44:42.703990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.704024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.704158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.704193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.704338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.704372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.704482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.704520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.704638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.704672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 
00:37:44.584 [2024-11-18 18:44:42.704807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.704840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.704981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.705015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.705154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.705188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.705322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.705357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.705469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.705505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 
00:37:44.584 [2024-11-18 18:44:42.705650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.705685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.705791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.705825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.705926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.705960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.706065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.706098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.706265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.706300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 
00:37:44.584 [2024-11-18 18:44:42.706401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.706434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.706561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.706595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.706743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.706778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.706884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.706918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.707032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.707065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 
00:37:44.584 [2024-11-18 18:44:42.707165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.707199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.584 qpair failed and we were unable to recover it. 00:37:44.584 [2024-11-18 18:44:42.707297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.584 [2024-11-18 18:44:42.707331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 00:37:44.585 [2024-11-18 18:44:42.707462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.707495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 00:37:44.585 [2024-11-18 18:44:42.707619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.707655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 00:37:44.585 [2024-11-18 18:44:42.707762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.707796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 
00:37:44.585 [2024-11-18 18:44:42.707932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.707966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 00:37:44.585 [2024-11-18 18:44:42.708071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.708106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 00:37:44.585 [2024-11-18 18:44:42.708267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.708301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 00:37:44.585 [2024-11-18 18:44:42.708405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.708439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 00:37:44.585 [2024-11-18 18:44:42.708553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.708588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 
00:37:44.585 [2024-11-18 18:44:42.708740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.708790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 00:37:44.585 [2024-11-18 18:44:42.708942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.708983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 00:37:44.585 [2024-11-18 18:44:42.709123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.709158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 00:37:44.585 [2024-11-18 18:44:42.709298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.709333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 00:37:44.585 [2024-11-18 18:44:42.709470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.709505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 
00:37:44.585 [2024-11-18 18:44:42.709640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.709695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 00:37:44.585 [2024-11-18 18:44:42.709820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.709861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 00:37:44.585 [2024-11-18 18:44:42.710003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.710036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 00:37:44.585 [2024-11-18 18:44:42.710146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.710180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 00:37:44.585 [2024-11-18 18:44:42.710316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.710350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 
00:37:44.585 [2024-11-18 18:44:42.710460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.710494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 00:37:44.585 [2024-11-18 18:44:42.710628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.710662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 00:37:44.585 [2024-11-18 18:44:42.710765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.710798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 00:37:44.585 [2024-11-18 18:44:42.710923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.710956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 00:37:44.585 [2024-11-18 18:44:42.711098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.585 [2024-11-18 18:44:42.711134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.585 qpair failed and we were unable to recover it. 
00:37:44.585 [2024-11-18 18:44:42.711279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.585 [2024-11-18 18:44:42.711314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.585 qpair failed and we were unable to recover it.
00:37:44.585 [2024-11-18 18:44:42.711455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.585 [2024-11-18 18:44:42.711490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.585 qpair failed and we were unable to recover it.
00:37:44.585 [2024-11-18 18:44:42.711631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.585 [2024-11-18 18:44:42.711666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.585 qpair failed and we were unable to recover it.
00:37:44.585 [2024-11-18 18:44:42.711806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.585 [2024-11-18 18:44:42.711841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.585 qpair failed and we were unable to recover it.
00:37:44.585 [2024-11-18 18:44:42.711955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.585 [2024-11-18 18:44:42.711991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.585 qpair failed and we were unable to recover it.
00:37:44.585 [2024-11-18 18:44:42.712105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.585 [2024-11-18 18:44:42.712139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.585 qpair failed and we were unable to recover it.
00:37:44.585 [2024-11-18 18:44:42.712278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.585 [2024-11-18 18:44:42.712311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.585 qpair failed and we were unable to recover it.
00:37:44.585 [2024-11-18 18:44:42.712450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.585 [2024-11-18 18:44:42.712483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.585 qpair failed and we were unable to recover it.
00:37:44.585 [2024-11-18 18:44:42.712595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.585 [2024-11-18 18:44:42.712635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.585 qpair failed and we were unable to recover it.
00:37:44.585 [2024-11-18 18:44:42.712774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.585 [2024-11-18 18:44:42.712808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.585 qpair failed and we were unable to recover it.
00:37:44.585 [2024-11-18 18:44:42.712944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.585 [2024-11-18 18:44:42.712977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.585 qpair failed and we were unable to recover it.
00:37:44.585 [2024-11-18 18:44:42.713118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.585 [2024-11-18 18:44:42.713154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.585 qpair failed and we were unable to recover it.
00:37:44.585 [2024-11-18 18:44:42.713291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.585 [2024-11-18 18:44:42.713327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.585 qpair failed and we were unable to recover it.
00:37:44.585 [2024-11-18 18:44:42.713451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.585 [2024-11-18 18:44:42.713486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.585 qpair failed and we were unable to recover it.
00:37:44.585 [2024-11-18 18:44:42.713622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.713657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.713834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.713882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.714001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.714037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.714212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.714247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.714353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.714388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.714526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.714560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.714705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.714740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.714901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.714935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.715072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.715106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.715253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.715287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.715418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.715453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.715555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.715589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.715700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.715734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.715874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.715907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.716049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.716083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.716223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.716257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.716387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.716424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.716563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.716602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.716756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.716791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.716926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.716960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.717097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.717131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.717270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.717305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.717418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.717452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.717583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.717624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.717735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.717769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.717878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.717912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.718046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.718080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.718221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.718254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.718396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.718431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.718540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.718575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.718732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.718780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.718933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.718967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.719105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.719139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.719276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.719310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.719446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.719480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.719619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.719654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.719771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.719807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.586 [2024-11-18 18:44:42.719979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.586 [2024-11-18 18:44:42.720017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.586 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.720122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.720156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.720289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.720324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.720431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.720466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.720611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.720646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.720753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.720788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.720949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.720982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.721126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.721161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.721268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.721302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.721444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.721480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.721593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.721634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.721747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.721782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.721944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.721979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.722086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.722120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.722243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.722275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.722453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.722488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.722637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.722671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.722778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.722811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.722917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.722951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.723056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.723091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.723225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.723263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.723383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.723418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.723552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.723587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.723748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.723796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.723964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.723998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.724109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.724143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.724280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.724314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.724478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.724513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.724643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.724677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.724794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.724831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.724973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.725008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.725147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.725182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.725317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.725351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.725450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.725484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.725638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.725675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.725823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.725857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.726001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.726035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.726167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.726201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.587 [2024-11-18 18:44:42.726334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.587 [2024-11-18 18:44:42.726367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.587 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.726502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.726536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.726666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.726700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.726805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.726840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.727004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.727039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.727176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.727211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.727316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.727349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.727481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.727515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.727646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.727681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.727816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.727851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.727959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.727994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.728103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.728138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.728239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.728272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.728435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.728469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.728570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.728603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.728743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.728780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.728942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.728990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.729113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.729150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.729296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.729332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.729440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.729475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.729640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.729675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.729795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.729831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.729937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.729977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.730105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.730139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.730275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.730310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.730410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.588 [2024-11-18 18:44:42.730444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.588 qpair failed and we were unable to recover it.
00:37:44.588 [2024-11-18 18:44:42.730554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.588 [2024-11-18 18:44:42.730589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.588 qpair failed and we were unable to recover it. 00:37:44.588 [2024-11-18 18:44:42.730742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.588 [2024-11-18 18:44:42.730777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.588 qpair failed and we were unable to recover it. 00:37:44.588 [2024-11-18 18:44:42.730922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.588 [2024-11-18 18:44:42.730958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.588 qpair failed and we were unable to recover it. 00:37:44.588 [2024-11-18 18:44:42.731095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.588 [2024-11-18 18:44:42.731131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.588 qpair failed and we were unable to recover it. 00:37:44.588 [2024-11-18 18:44:42.731303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.588 [2024-11-18 18:44:42.731338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.588 qpair failed and we were unable to recover it. 
00:37:44.588 [2024-11-18 18:44:42.731478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.588 [2024-11-18 18:44:42.731524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.588 qpair failed and we were unable to recover it. 00:37:44.588 [2024-11-18 18:44:42.731662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.588 [2024-11-18 18:44:42.731697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.588 qpair failed and we were unable to recover it. 00:37:44.588 [2024-11-18 18:44:42.731820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.731855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.731961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.731996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.732132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.732166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 
00:37:44.589 [2024-11-18 18:44:42.732304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.732338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.732476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.732511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.732622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.732657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.732796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.732832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.732972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.733006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 
00:37:44.589 [2024-11-18 18:44:42.733118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.733153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.733269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.733303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.733452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.733485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.733632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.733666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.733773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.733806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 
00:37:44.589 [2024-11-18 18:44:42.733944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.733980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.734115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.734150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.734311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.734345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.734475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.734510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.734628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.734662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 
00:37:44.589 [2024-11-18 18:44:42.734780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.734816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.734948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.734981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.735146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.735185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.735341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.735375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.735484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.735519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 
00:37:44.589 [2024-11-18 18:44:42.735655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.735690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.735824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.735859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.735994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.736029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.736185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.736220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.736358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.736392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 
00:37:44.589 [2024-11-18 18:44:42.736498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.736531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.736644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.736695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.736834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.736868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.737000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.737033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.737166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.737199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 
00:37:44.589 [2024-11-18 18:44:42.737298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.737331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.737493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.737528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.737630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.737664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.737804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.737837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 00:37:44.589 [2024-11-18 18:44:42.737953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.589 [2024-11-18 18:44:42.737987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.589 qpair failed and we were unable to recover it. 
00:37:44.590 [2024-11-18 18:44:42.738089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.738122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.738253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.738286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.738417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.738451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.738584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.738636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.738790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.738826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 
00:37:44.590 [2024-11-18 18:44:42.738944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.738978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.739141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.739176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.739317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.739351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.739463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.739498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.739641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.739676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 
00:37:44.590 [2024-11-18 18:44:42.739786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.739820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.739945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.739980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.740081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.740115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.740265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.740298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.740399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.740433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 
00:37:44.590 [2024-11-18 18:44:42.740570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.740604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.740721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.740755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.740927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.740961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.741071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.741105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.741243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.741277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 
00:37:44.590 [2024-11-18 18:44:42.741389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.741426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.741552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.741586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.741704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.741738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.741857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.741891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.742015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.742064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 
00:37:44.590 [2024-11-18 18:44:42.742213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.742248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.742388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.742422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.742556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.742590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.742698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.742732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 00:37:44.590 [2024-11-18 18:44:42.742848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.742883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it. 
00:37:44.590 [2024-11-18 18:44:42.743018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.590 [2024-11-18 18:44:42.743051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.590 qpair failed and we were unable to recover it.
00:37:44.590 [... the same two-line sequence (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error, addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats continuously from 18:44:42.743214 through 18:44:42.763401 for tqpair handles 0x6150001f2f00, 0x61500021ff00, and 0x615000210000 ...]
00:37:44.593 [2024-11-18 18:44:42.763547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.593 [2024-11-18 18:44:42.763581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.593 qpair failed and we were unable to recover it. 00:37:44.593 [2024-11-18 18:44:42.763720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.763753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.763921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.763954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.764088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.764121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.764232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.764266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 
00:37:44.594 [2024-11-18 18:44:42.764378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.764411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.764520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.764553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.764707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.764741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.764853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.764887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.765095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.765132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 
00:37:44.594 [2024-11-18 18:44:42.765250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.765287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.765463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.765500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.765639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.765673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.765806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.765839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.765980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.766014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 
00:37:44.594 [2024-11-18 18:44:42.766176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.766209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.766311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.766344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.766454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.766489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.766658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.766691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.766822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.766856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 
00:37:44.594 [2024-11-18 18:44:42.766963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.766996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.767119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.767156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.767271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.767309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.767479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.767513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.767631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.767679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 
00:37:44.594 [2024-11-18 18:44:42.767851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.767888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.767998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.768033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.768181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.768233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.768355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.768409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.768553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.768587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 
00:37:44.594 [2024-11-18 18:44:42.768707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.768742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.768859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.768893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.769009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.769043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.769177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.769214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.769323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.769356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 
00:37:44.594 [2024-11-18 18:44:42.769492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.769525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.769658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.769692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.769826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.769859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.594 [2024-11-18 18:44:42.769996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.594 [2024-11-18 18:44:42.770030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.594 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.770150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.770187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 
00:37:44.595 [2024-11-18 18:44:42.770361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.770398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.770547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.770585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.770733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.770769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.770899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.770933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.771045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.771080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 
00:37:44.595 [2024-11-18 18:44:42.771225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.771258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.771427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.771460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.771612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.771647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.771753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.771786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.771882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.771916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 
00:37:44.595 [2024-11-18 18:44:42.772058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.772091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.772220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.772254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.772392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.772426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.772545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.772580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.772725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.772759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 
00:37:44.595 [2024-11-18 18:44:42.772894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.772927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.773066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.773099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.773236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.773270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.773434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.773468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.773618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.773653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 
00:37:44.595 [2024-11-18 18:44:42.773802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.773855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.773953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.773986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.774135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.774187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.774303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.774337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.774463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.774496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 
00:37:44.595 [2024-11-18 18:44:42.774679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.774713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.774838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.774875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.774982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.775019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.775145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.775183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.775332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.775372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 
00:37:44.595 [2024-11-18 18:44:42.775520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.775553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.775704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.775743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.775910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.775963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.776117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.776174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 00:37:44.595 [2024-11-18 18:44:42.776319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.595 [2024-11-18 18:44:42.776354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.595 qpair failed and we were unable to recover it. 
00:37:44.595 [2024-11-18 18:44:42.776468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.595 [2024-11-18 18:44:42.776502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.595 qpair failed and we were unable to recover it.
00:37:44.595 [2024-11-18 18:44:42.776621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.595 [2024-11-18 18:44:42.776655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.776818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.776851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.776975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.777009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.777143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.777175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.777290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.777323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.777487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.777522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.777666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.777700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.777840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.777874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.778017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.778069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.778227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.778262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.778429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.778475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.778622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.778657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.778794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.778827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.778929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.778962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.779097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.779131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.779293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.779326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.779436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.779469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.779642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.779677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.779807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.779860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.780016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.780070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.780218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.780269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.780376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.780410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.780554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.780588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.780703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.780738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.780892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.780926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.781040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.781074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.781235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.781268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.781400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.781433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.781544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.781578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.781697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.781731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.781861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.781895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.782034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.782069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.782173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.782207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.782342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.782376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.782514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.782548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.782664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.782700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.782805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.782838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.782980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.596 [2024-11-18 18:44:42.783021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.596 qpair failed and we were unable to recover it.
00:37:44.596 [2024-11-18 18:44:42.783122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.783155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.783283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.783316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.783471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.783513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.783658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.783696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.783806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.783841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.783952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.783986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.784097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.784133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.784268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.784302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.784445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.784480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.784616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.784650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.784766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.784800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.784944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.784977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.785107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.785141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.785285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.785319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.785456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.785490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.785628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.785668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.785782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.785817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.785940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.785980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.786086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.786121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.786228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.786263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.786403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.786437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.786553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.786591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.786763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.786814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.787006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.787045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.787161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.787198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.787343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.787381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.787542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.787575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.787725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.787759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.787905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.787941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.788061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.788095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.788243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.788277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.788411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.788445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.788568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.597 [2024-11-18 18:44:42.788601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.597 qpair failed and we were unable to recover it.
00:37:44.597 [2024-11-18 18:44:42.788718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.788752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.788855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.788888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.789025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.789058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.789179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.789216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.789378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.789411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.789549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.789582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.789736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.789773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.789899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.789933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.790062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.790114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.790255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.790292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.790437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.790474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.790617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.790666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.790784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.790824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.790939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.790974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.791085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.791119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.791246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.791294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.791466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.791503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.791637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.791688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.791797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.791830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.791993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.792027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.792199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.792235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.792418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.792455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.792562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.792600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.792728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.792761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.792882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.792915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.793021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.793054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.793188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.793225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.793374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.793426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.793567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.793604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.793780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.793814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.793953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.793986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.794117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.794150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.794334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.794371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.794584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.794630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.794763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.794797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.794955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.794996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.795165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.795222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.598 [2024-11-18 18:44:42.795337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.598 [2024-11-18 18:44:42.795370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.598 qpair failed and we were unable to recover it.
00:37:44.599 [2024-11-18 18:44:42.795519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.599 [2024-11-18 18:44:42.795554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.599 qpair failed and we were unable to recover it.
00:37:44.599 [2024-11-18 18:44:42.795684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.599 [2024-11-18 18:44:42.795732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.599 qpair failed and we were unable to recover it.
00:37:44.599 [2024-11-18 18:44:42.795875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.599 [2024-11-18 18:44:42.795925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.599 qpair failed and we were unable to recover it.
00:37:44.599 [2024-11-18 18:44:42.796073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.599 [2024-11-18 18:44:42.796109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.599 qpair failed and we were unable to recover it.
00:37:44.599 [2024-11-18 18:44:42.796210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.599 [2024-11-18 18:44:42.796244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.599 qpair failed and we were unable to recover it.
00:37:44.599 [2024-11-18 18:44:42.796376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.796410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.796541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.796574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.796715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.796748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.796879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.796918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.797052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.797085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 
00:37:44.599 [2024-11-18 18:44:42.797242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.797274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.797405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.797438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.797542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.797575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.797717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.797751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.797883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.797916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 
00:37:44.599 [2024-11-18 18:44:42.798047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.798080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.798209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.798242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.798399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.798431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.798562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.798595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.798739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.798772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 
00:37:44.599 [2024-11-18 18:44:42.798912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.798946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.799058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.799092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.799228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.799262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.799377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.799409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.799570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.799621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 
00:37:44.599 [2024-11-18 18:44:42.799755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.799802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.799985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.800033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.800157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.800191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.800301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.800334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.800478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.800511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 
00:37:44.599 [2024-11-18 18:44:42.800660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.800710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.800851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.800884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.800995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.801028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.801141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.801175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.801340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.801374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 
00:37:44.599 [2024-11-18 18:44:42.801491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.801524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.801637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.801670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.599 [2024-11-18 18:44:42.801816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.599 [2024-11-18 18:44:42.801849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.599 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.801988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.802022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.802153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.802186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 
00:37:44.600 [2024-11-18 18:44:42.802297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.802329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.802464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.802500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.802659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.802692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.802810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.802844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.802978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.803012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 
00:37:44.600 [2024-11-18 18:44:42.803170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.803202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.803304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.803336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.803464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.803512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.803667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.803722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.803857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.803895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 
00:37:44.600 [2024-11-18 18:44:42.804036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.804071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.804209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.804243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.804383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.804417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.804557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.804592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.804715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.804753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 
00:37:44.600 [2024-11-18 18:44:42.804901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.804934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.805066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.805099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.805209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.805242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.805376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.805409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.805555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.805591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 
00:37:44.600 [2024-11-18 18:44:42.805707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.805743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.805877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.805919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.806067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.806102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.806231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.806265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.806371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.806404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 
00:37:44.600 [2024-11-18 18:44:42.806567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.806601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.806728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.806775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.806913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.806961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.807126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.807167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.807274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.807308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 
00:37:44.600 [2024-11-18 18:44:42.807442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.807475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.807579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.807621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.807758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.807797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.807943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.807982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 00:37:44.600 [2024-11-18 18:44:42.808092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.600 [2024-11-18 18:44:42.808127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.600 qpair failed and we were unable to recover it. 
00:37:44.600 [2024-11-18 18:44:42.808315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.601 [2024-11-18 18:44:42.808352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.601 qpair failed and we were unable to recover it. 00:37:44.601 [2024-11-18 18:44:42.808466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.601 [2024-11-18 18:44:42.808500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.601 qpair failed and we were unable to recover it. 00:37:44.601 [2024-11-18 18:44:42.808642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.601 [2024-11-18 18:44:42.808676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.601 qpair failed and we were unable to recover it. 00:37:44.601 [2024-11-18 18:44:42.808792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.601 [2024-11-18 18:44:42.808825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.601 qpair failed and we were unable to recover it. 00:37:44.601 [2024-11-18 18:44:42.808953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.601 [2024-11-18 18:44:42.808986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.601 qpair failed and we were unable to recover it. 
00:37:44.601 [2024-11-18 18:44:42.809083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.601 [2024-11-18 18:44:42.809116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.601 qpair failed and we were unable to recover it. 00:37:44.601 [2024-11-18 18:44:42.809240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.601 [2024-11-18 18:44:42.809278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.601 qpair failed and we were unable to recover it. 00:37:44.601 [2024-11-18 18:44:42.809393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.601 [2024-11-18 18:44:42.809432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.601 qpair failed and we were unable to recover it. 00:37:44.601 [2024-11-18 18:44:42.809555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.601 [2024-11-18 18:44:42.809603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.601 qpair failed and we were unable to recover it. 00:37:44.601 [2024-11-18 18:44:42.809759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.601 [2024-11-18 18:44:42.809794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.601 qpair failed and we were unable to recover it. 
00:37:44.604 [2024-11-18 18:44:42.827620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.827668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.827806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.827843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.827988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.828022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.828160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.828194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.828299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.828334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 
00:37:44.604 [2024-11-18 18:44:42.828455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.828504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.828629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.828665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.828778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.828812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.828938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.828971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.829078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.829112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 
00:37:44.604 [2024-11-18 18:44:42.829244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.829277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.829391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.829425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.829526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.829559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.829677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.829719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.829855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.829890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 
00:37:44.604 [2024-11-18 18:44:42.830031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.830064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.830169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.830203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.830336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.830370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.830472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.830505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.830631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.830665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 
00:37:44.604 [2024-11-18 18:44:42.830776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.830810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.830924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.830958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.831068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.831102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.831205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.831239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.831370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.831404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 
00:37:44.604 [2024-11-18 18:44:42.831534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.831567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.831705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.604 [2024-11-18 18:44:42.831753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.604 qpair failed and we were unable to recover it. 00:37:44.604 [2024-11-18 18:44:42.831930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.831978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.832126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.832163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.832301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.832335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 
00:37:44.605 [2024-11-18 18:44:42.832506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.832541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.832696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.832744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.832863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.832899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.833039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.833073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.833203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.833236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 
00:37:44.605 [2024-11-18 18:44:42.833374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.833407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.833536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.833584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.833723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.833789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.833949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.833998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.834138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.834172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 
00:37:44.605 [2024-11-18 18:44:42.834289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.834323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.834456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.834489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.834643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.834698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.834830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.834877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.835028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.835065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 
00:37:44.605 [2024-11-18 18:44:42.835207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.835242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.835352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.835388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.835527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.835560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.835686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.835722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.835833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.835869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 
00:37:44.605 [2024-11-18 18:44:42.836003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.836037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.836138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.836171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.836277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.836312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.836441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.836495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.836663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.836700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 
00:37:44.605 [2024-11-18 18:44:42.836804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.836852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.836991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.837026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.837158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.837193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.837332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.837365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.837472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.837507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 
00:37:44.605 [2024-11-18 18:44:42.837655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.837703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.837834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.837882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.838033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.838067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.838199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.838232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 00:37:44.605 [2024-11-18 18:44:42.838343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.605 [2024-11-18 18:44:42.838377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.605 qpair failed and we were unable to recover it. 
00:37:44.606 [2024-11-18 18:44:42.838507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.606 [2024-11-18 18:44:42.838541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.606 qpair failed and we were unable to recover it. 00:37:44.606 [2024-11-18 18:44:42.838702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.606 [2024-11-18 18:44:42.838749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.606 qpair failed and we were unable to recover it. 00:37:44.606 [2024-11-18 18:44:42.838896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.606 [2024-11-18 18:44:42.838931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.606 qpair failed and we were unable to recover it. 00:37:44.606 [2024-11-18 18:44:42.839065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.606 [2024-11-18 18:44:42.839099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.606 qpair failed and we were unable to recover it. 00:37:44.606 [2024-11-18 18:44:42.839238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.606 [2024-11-18 18:44:42.839271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.606 qpair failed and we were unable to recover it. 
00:37:44.606 [2024-11-18 18:44:42.839402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.606 [2024-11-18 18:44:42.839435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.606 qpair failed and we were unable to recover it. 00:37:44.606 [2024-11-18 18:44:42.839569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.606 [2024-11-18 18:44:42.839603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.606 qpair failed and we were unable to recover it. 00:37:44.606 [2024-11-18 18:44:42.839713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.606 [2024-11-18 18:44:42.839746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.606 qpair failed and we were unable to recover it. 00:37:44.606 [2024-11-18 18:44:42.839850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.606 [2024-11-18 18:44:42.839883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.606 qpair failed and we were unable to recover it. 00:37:44.606 [2024-11-18 18:44:42.839994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.606 [2024-11-18 18:44:42.840027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.606 qpair failed and we were unable to recover it. 
00:37:44.606 [2024-11-18 18:44:42.840143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.606 [2024-11-18 18:44:42.840178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.606 qpair failed and we were unable to recover it. 00:37:44.606 [2024-11-18 18:44:42.840315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.606 [2024-11-18 18:44:42.840349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.606 qpair failed and we were unable to recover it. 00:37:44.606 [2024-11-18 18:44:42.840477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.606 [2024-11-18 18:44:42.840511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.606 qpair failed and we were unable to recover it. 00:37:44.606 [2024-11-18 18:44:42.840648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.606 [2024-11-18 18:44:42.840681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.606 qpair failed and we were unable to recover it. 00:37:44.606 [2024-11-18 18:44:42.840812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.606 [2024-11-18 18:44:42.840846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.606 qpair failed and we were unable to recover it. 
00:37:44.606 [2024-11-18 18:44:42.840984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.606 [2024-11-18 18:44:42.841023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.606 qpair failed and we were unable to recover it. 00:37:44.606 [2024-11-18 18:44:42.841155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.606 [2024-11-18 18:44:42.841188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.606 qpair failed and we were unable to recover it. 00:37:44.606 [2024-11-18 18:44:42.841325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.606 [2024-11-18 18:44:42.841359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.606 qpair failed and we were unable to recover it. 00:37:44.606 [2024-11-18 18:44:42.841504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.606 [2024-11-18 18:44:42.841553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.606 qpair failed and we were unable to recover it. 00:37:44.606 [2024-11-18 18:44:42.841711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.606 [2024-11-18 18:44:42.841748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.606 qpair failed and we were unable to recover it. 
00:37:44.606 [2024-11-18 18:44:42.841904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.606 [2024-11-18 18:44:42.841951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.606 qpair failed and we were unable to recover it.
00:37:44.606 [2024-11-18 18:44:42.842094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.606 [2024-11-18 18:44:42.842129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.606 qpair failed and we were unable to recover it.
00:37:44.606 [2024-11-18 18:44:42.842262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.606 [2024-11-18 18:44:42.842295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.606 qpair failed and we were unable to recover it.
00:37:44.606 [2024-11-18 18:44:42.842407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.606 [2024-11-18 18:44:42.842439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.606 qpair failed and we were unable to recover it.
00:37:44.606 [2024-11-18 18:44:42.842539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.606 [2024-11-18 18:44:42.842572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.606 qpair failed and we were unable to recover it.
00:37:44.606 [2024-11-18 18:44:42.842717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.606 [2024-11-18 18:44:42.842751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.606 qpair failed and we were unable to recover it.
00:37:44.606 [2024-11-18 18:44:42.842858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.606 [2024-11-18 18:44:42.842895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.606 qpair failed and we were unable to recover it.
00:37:44.606 [2024-11-18 18:44:42.843031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.606 [2024-11-18 18:44:42.843063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.606 qpair failed and we were unable to recover it.
00:37:44.606 [2024-11-18 18:44:42.843193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.606 [2024-11-18 18:44:42.843226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.606 qpair failed and we were unable to recover it.
00:37:44.606 [2024-11-18 18:44:42.843360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.606 [2024-11-18 18:44:42.843393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.606 qpair failed and we were unable to recover it.
00:37:44.606 [2024-11-18 18:44:42.843538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.606 [2024-11-18 18:44:42.843597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.606 qpair failed and we were unable to recover it.
00:37:44.606 [2024-11-18 18:44:42.843734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.606 [2024-11-18 18:44:42.843783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.606 qpair failed and we were unable to recover it.
00:37:44.606 [2024-11-18 18:44:42.843970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.606 [2024-11-18 18:44:42.844018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.606 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.844160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.844194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.844329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.844362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.844494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.844527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.844689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.844723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.844854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.844901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.845080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.845119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.845259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.845294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.845401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.845435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.845571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.845605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.845736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.845771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.845886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.845921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.846090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.846127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.846246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.846281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.846422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.846456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.846578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.846621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.846754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.846787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.846899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.846932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.847036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.847070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.847180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.847215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.847352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.847386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.847540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.847587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.847755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.847802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.847937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.847977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.848091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.848125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.848234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.848268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.848376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.848411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.848539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.848573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.848714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.848761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.848883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.848920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.849058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.849092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.849222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.849256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.849393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.849427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.849528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.849562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.849706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.849755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.849901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.849938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.850079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.850113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.850251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.850284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.850382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.850414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.850547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.607 [2024-11-18 18:44:42.850579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.607 qpair failed and we were unable to recover it.
00:37:44.607 [2024-11-18 18:44:42.850860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.850894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.851011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.851046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.851151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.851185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.851286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.851320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.851478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.851526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.851639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.851676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.851775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.851809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.851948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.851982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.852087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.852121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.852226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.852261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.852378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.852412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.852572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.852613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.852749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.852785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.852921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.852955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.853091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.853126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.853239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.853274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.853394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.853429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.853535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.853570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.853692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.853727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.853833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.853867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.854001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.854035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.854173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.854207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.854313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.854347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.854509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.854551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.854668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.854701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.854834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.854867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.855005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.855038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.855175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.855209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.855338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.855370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.855474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.855506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.855605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.855644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.855785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.855821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.855949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.855983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.856083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.856116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.856219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.856253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.856385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.856418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.856554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.856587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.856754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.856788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.856898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.856933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.857066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.857098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.608 [2024-11-18 18:44:42.857224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.608 [2024-11-18 18:44:42.857257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.608 qpair failed and we were unable to recover it.
00:37:44.609 [2024-11-18 18:44:42.857358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.609 [2024-11-18 18:44:42.857392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.609 qpair failed and we were unable to recover it.
00:37:44.609 [2024-11-18 18:44:42.857506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.609 [2024-11-18 18:44:42.857538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.609 qpair failed and we were unable to recover it.
00:37:44.609 [2024-11-18 18:44:42.857681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.857717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.857874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.857908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.858060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.858093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.858227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.858262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.858399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.858433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 
00:37:44.609 [2024-11-18 18:44:42.858530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.858563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.858687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.858722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.858825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.858858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.858984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.859018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.859119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.859152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 
00:37:44.609 [2024-11-18 18:44:42.859260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.859294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.859444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.859491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.859636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.859673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.859784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.859818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.859975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.860009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 
00:37:44.609 [2024-11-18 18:44:42.860111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.860146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.860255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.860289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.860400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.860434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.860549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.860586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.860731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.860766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 
00:37:44.609 [2024-11-18 18:44:42.860898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.860937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.861081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.861116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.861278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.861312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.861424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.861459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.861561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.861596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 
00:37:44.609 [2024-11-18 18:44:42.861701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.861733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.861870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.861903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.862005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.862038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.862169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.862202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.862343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.862378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 
00:37:44.609 [2024-11-18 18:44:42.862508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.862557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.862709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.862747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.862852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.862887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.862988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.863023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.863137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.863171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 
00:37:44.609 [2024-11-18 18:44:42.863279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.863314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.863494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.863541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.863693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.863730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.609 qpair failed and we were unable to recover it. 00:37:44.609 [2024-11-18 18:44:42.863864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.609 [2024-11-18 18:44:42.863898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.864029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.864062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 
00:37:44.610 [2024-11-18 18:44:42.864203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.864236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.864353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.864389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.864553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.864588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.864702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.864736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.864871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.864904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 
00:37:44.610 [2024-11-18 18:44:42.865018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.865051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.865189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.865222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.865387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.865421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.865533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.865569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.865682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.865717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 
00:37:44.610 [2024-11-18 18:44:42.865834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.865869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.865999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.866033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.866188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.866224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.866358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.866392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.866496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.866529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 
00:37:44.610 [2024-11-18 18:44:42.866658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.866691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.866849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.866883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.867008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.867040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.867160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.867193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.867335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.867369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 
00:37:44.610 [2024-11-18 18:44:42.867501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.867538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.867643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.867676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.867782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.867815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.867947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.867979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.868078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.868112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 
00:37:44.610 [2024-11-18 18:44:42.868210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.868243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.868377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.868410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.868511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.868545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.868709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.868746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.868865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.868913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 
00:37:44.610 [2024-11-18 18:44:42.869062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.869099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.869207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.869241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.610 [2024-11-18 18:44:42.869343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.610 [2024-11-18 18:44:42.869378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.610 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-18 18:44:42.869484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-18 18:44:42.869518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-18 18:44:42.869669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-18 18:44:42.869705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 
00:37:44.870 [2024-11-18 18:44:42.869813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-18 18:44:42.869848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-18 18:44:42.869950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-18 18:44:42.869983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-18 18:44:42.870102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-18 18:44:42.870135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-18 18:44:42.870254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-18 18:44:42.870287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-18 18:44:42.870415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-18 18:44:42.870448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 
00:37:44.870 [2024-11-18 18:44:42.870553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-18 18:44:42.870587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-18 18:44:42.870696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-18 18:44:42.870730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-18 18:44:42.870840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-18 18:44:42.870873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-18 18:44:42.870996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-18 18:44:42.871032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-18 18:44:42.871177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-18 18:44:42.871211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 
00:37:44.870 [2024-11-18 18:44:42.871912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:37:44.872 A controller has encountered a failure and is being reset. 
00:37:44.872 [2024-11-18 18:44:42.880543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-18 18:44:42.880596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:44.872 [2024-11-18 18:44:42.880635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:37:44.872 [2024-11-18 18:44:42.880678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:37:44.872 [2024-11-18 18:44:42.880709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:37:44.872 [2024-11-18 18:44:42.880735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:37:44.872 [2024-11-18 18:44:42.880763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:37:44.872 Unable to reset the controller. 00:37:44.872 [2024-11-18 18:44:43.004538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:44.872 [2024-11-18 18:44:43.004630] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:44.872 [2024-11-18 18:44:43.004656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:44.872 [2024-11-18 18:44:43.004679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:44.872 [2024-11-18 18:44:43.004697] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
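For context on the repeated failures above: `errno = 111` on Linux is `ECONNREFUSED`, i.e. the initiator keeps retrying qpair connects to 10.0.0.2:4420 while no listener is accepting there (the target side is down or being reset). A quick way to confirm the errno mapping on a Linux build host (this assumes `python3` is installed; it is a sanity check, not part of the captured run):

```shell
# errno 111 on Linux is ECONNREFUSED; verify the symbolic name and message
# from the shell via Python's errno/os modules.
python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED))'
# Expected on Linux: 111 Connection refused
```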
00:37:44.872 [2024-11-18 18:44:43.007289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:44.872 [2024-11-18 18:44:43.007341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:44.872 [2024-11-18 18:44:43.007391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:44.872 [2024-11-18 18:44:43.007397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:45.438 Malloc0 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.438 18:44:43 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:45.438 [2024-11-18 18:44:43.718734] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.438 18:44:43 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:45.438 [2024-11-18 18:44:43.748642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.438 18:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3140418 00:37:46.004 Controller properly reset. 
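The xtrace output above is the target-side setup performed through the test suite's `rpc_cmd` wrapper. As a hedged sketch (the RPC names and arguments are taken from the trace itself, but the `$RPC` script path is an assumption about a typical SPDK checkout, and nothing here talks to a running target), the equivalent standalone sequence is:

```shell
#!/usr/bin/env bash
# Sketch of the NVMe-oF/TCP target setup seen in the trace above.
# Assumption: $RPC points at SPDK's scripts/rpc.py in a source checkout.
# The commands are only assembled and printed, not executed, so the
# sequence is easy to read in isolation.
RPC="./scripts/rpc.py"
NQN="nqn.2016-06.io.spdk:cnode1"

nvmf_target_setup_cmds() {
    printf '%s\n' \
        "$RPC bdev_malloc_create 64 512 -b Malloc0" \
        "$RPC nvmf_create_transport -t tcp -o" \
        "$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001" \
        "$RPC nvmf_subsystem_add_ns $NQN Malloc0" \
        "$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420" \
        "$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420"
}

nvmf_target_setup_cmds
```

In the actual run these go through `rpc_cmd`, which forwards to the app's RPC socket and checks the return code, which is why each call in the trace is followed by the `[[ 0 == 0 ]]` status test.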
00:37:51.263 Initializing NVMe Controllers 00:37:51.263 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:51.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:51.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:51.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:51.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:51.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:51.263 Initialization complete. Launching workers. 00:37:51.263 Starting thread on core 1 00:37:51.263 Starting thread on core 2 00:37:51.263 Starting thread on core 3 00:37:51.263 Starting thread on core 0 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:51.263 00:37:51.263 real 0m11.658s 00:37:51.263 user 0m37.245s 00:37:51.263 sys 0m7.612s 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:51.263 ************************************ 00:37:51.263 END TEST nvmf_target_disconnect_tc2 00:37:51.263 ************************************ 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:51.263 18:44:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:51.263 rmmod nvme_tcp 00:37:51.263 rmmod nvme_fabrics 00:37:51.263 rmmod nvme_keyring 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3140904 ']' 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3140904 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3140904 ']' 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3140904 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3140904 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 
00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3140904' 00:37:51.263 killing process with pid 3140904 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3140904 00:37:51.263 18:44:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3140904 00:37:52.197 18:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:52.197 18:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:52.197 18:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:52.197 18:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:37:52.197 18:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:37:52.197 18:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:52.197 18:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:37:52.197 18:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:52.197 18:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:52.197 18:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:52.197 18:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:52.197 18:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:54.095 18:44:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:54.095 00:37:54.095 real 0m17.616s 00:37:54.095 user 1m5.120s 00:37:54.095 
sys 0m10.203s 00:37:54.095 18:44:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:54.095 18:44:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:54.095 ************************************ 00:37:54.095 END TEST nvmf_target_disconnect 00:37:54.095 ************************************ 00:37:54.095 18:44:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:54.095 00:37:54.095 real 7m39.731s 00:37:54.095 user 19m53.729s 00:37:54.095 sys 1m32.990s 00:37:54.095 18:44:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:54.095 18:44:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:54.095 ************************************ 00:37:54.095 END TEST nvmf_host 00:37:54.095 ************************************ 00:37:54.095 18:44:52 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:37:54.095 18:44:52 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:37:54.095 18:44:52 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:54.095 18:44:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:54.095 18:44:52 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:54.095 18:44:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:54.095 ************************************ 00:37:54.096 START TEST nvmf_target_core_interrupt_mode 00:37:54.096 ************************************ 00:37:54.096 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:54.096 * Looking for test storage... 
00:37:54.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:37:54.096 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:54.096 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:37:54.096 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:37:54.355 18:44:52 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:54.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:54.355 --rc 
genhtml_branch_coverage=1 00:37:54.355 --rc genhtml_function_coverage=1 00:37:54.355 --rc genhtml_legend=1 00:37:54.355 --rc geninfo_all_blocks=1 00:37:54.355 --rc geninfo_unexecuted_blocks=1 00:37:54.355 00:37:54.355 ' 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:54.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:54.355 --rc genhtml_branch_coverage=1 00:37:54.355 --rc genhtml_function_coverage=1 00:37:54.355 --rc genhtml_legend=1 00:37:54.355 --rc geninfo_all_blocks=1 00:37:54.355 --rc geninfo_unexecuted_blocks=1 00:37:54.355 00:37:54.355 ' 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:54.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:54.355 --rc genhtml_branch_coverage=1 00:37:54.355 --rc genhtml_function_coverage=1 00:37:54.355 --rc genhtml_legend=1 00:37:54.355 --rc geninfo_all_blocks=1 00:37:54.355 --rc geninfo_unexecuted_blocks=1 00:37:54.355 00:37:54.355 ' 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:54.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:54.355 --rc genhtml_branch_coverage=1 00:37:54.355 --rc genhtml_function_coverage=1 00:37:54.355 --rc genhtml_legend=1 00:37:54.355 --rc geninfo_all_blocks=1 00:37:54.355 --rc geninfo_unexecuted_blocks=1 00:37:54.355 00:37:54.355 ' 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:54.355 
18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:54.355 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:54.356 18:44:52 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:54.356 
18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:54.356 ************************************ 00:37:54.356 START TEST nvmf_abort 00:37:54.356 ************************************ 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:54.356 * Looking for test storage... 
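[editor's note] The `scripts/common.sh` trace above (`cmp_versions 1.15 '<' 2`) walks lcov's version string through a component-by-component numeric comparison before picking coverage flags. A minimal standalone sketch of that kind of dotted-version check — `ver_lt` is our own illustrative name, not SPDK's helper, and this ignores non-numeric components that the real script also handles:

```shell
#!/usr/bin/env bash
# Sketch of a dotted-version "less than" test, mirroring the
# component-by-component loop traced in scripts/common.sh above.
# ver_lt is a hypothetical name; SPDK uses cmp_versions/lt.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
        (( x < y )) && return 0           # earlier component smaller -> older
        (( x > y )) && return 1           # earlier component larger  -> newer
    done
    return 1                              # all equal -> not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

So `lcov --version` reporting 1.15 takes the "older than 2" branch, which is why the run exports the legacy `--rc lcov_branch_coverage=...` options seen in the trace.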
00:37:54.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:37:54.356 18:44:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:54.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:54.356 --rc genhtml_branch_coverage=1 00:37:54.356 --rc genhtml_function_coverage=1 00:37:54.356 --rc genhtml_legend=1 00:37:54.356 --rc geninfo_all_blocks=1 00:37:54.356 --rc geninfo_unexecuted_blocks=1 00:37:54.356 00:37:54.356 ' 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:54.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:54.356 --rc genhtml_branch_coverage=1 00:37:54.356 --rc genhtml_function_coverage=1 00:37:54.356 --rc genhtml_legend=1 00:37:54.356 --rc geninfo_all_blocks=1 00:37:54.356 --rc geninfo_unexecuted_blocks=1 00:37:54.356 00:37:54.356 ' 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:54.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:54.356 --rc genhtml_branch_coverage=1 00:37:54.356 --rc genhtml_function_coverage=1 00:37:54.356 --rc genhtml_legend=1 00:37:54.356 --rc geninfo_all_blocks=1 00:37:54.356 --rc geninfo_unexecuted_blocks=1 00:37:54.356 00:37:54.356 ' 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:54.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:54.356 --rc genhtml_branch_coverage=1 00:37:54.356 --rc genhtml_function_coverage=1 00:37:54.356 --rc genhtml_legend=1 00:37:54.356 --rc geninfo_all_blocks=1 00:37:54.356 --rc geninfo_unexecuted_blocks=1 00:37:54.356 00:37:54.356 ' 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:54.356 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:54.357 18:44:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:54.357 18:44:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:37:54.357 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
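[editor's note] The `paths/export.sh` traces above show `PATH` accumulating the same `/opt/go`, `/opt/protoc`, and `/opt/golangci` entries once per sourced test suite. A small sketch of an order-preserving de-duplication pass — `dedupe_path` is our own helper for illustration, not part of SPDK:

```shell
#!/usr/bin/env bash
# Drop repeated PATH entries while keeping the first occurrence of
# each, as an answer to the duplicated toolchain dirs traced above.
# dedupe_path is a hypothetical helper, not an SPDK function.
dedupe_path() {
    local entry out= seen=:
    local IFS=:
    for entry in $1; do
        case "$seen" in
            *":$entry:"*) continue ;;     # already emitted, skip repeat
        esac
        seen="$seen$entry:"
        out="${out:+$out:}$entry"
    done
    printf '%s\n' "$out"
}

dedupe_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin"
```

The duplicates are harmless for lookup (the shell stops at the first hit) but make every traced `PATH` line grow with each sourced suite.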
00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:56.256 18:44:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:56.256 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:56.257 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:56.257 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:56.257 
18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:56.257 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:56.257 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:56.257 18:44:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:56.257 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:56.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:56.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:37:56.515 00:37:56.515 --- 10.0.0.2 ping statistics --- 00:37:56.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:56.515 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:56.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:56.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:37:56.515 00:37:56.515 --- 10.0.0.1 ping statistics --- 00:37:56.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:56.515 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:56.515 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3143762 00:37:56.516 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:56.516 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3143762 00:37:56.516 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3143762 ']' 00:37:56.516 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:56.516 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:56.516 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:56.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:56.516 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:56.516 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:56.516 [2024-11-18 18:44:54.835256] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:56.516 [2024-11-18 18:44:54.837755] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:37:56.516 [2024-11-18 18:44:54.837859] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:56.774 [2024-11-18 18:44:54.988584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:57.032 [2024-11-18 18:44:55.131667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:57.032 [2024-11-18 18:44:55.131744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:57.032 [2024-11-18 18:44:55.131774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:57.032 [2024-11-18 18:44:55.131796] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:57.032 [2024-11-18 18:44:55.131818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:57.032 [2024-11-18 18:44:55.134558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:57.032 [2024-11-18 18:44:55.134652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:57.032 [2024-11-18 18:44:55.134675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:57.290 [2024-11-18 18:44:55.509844] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:57.290 [2024-11-18 18:44:55.510903] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:57.290 [2024-11-18 18:44:55.511721] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:57.290 [2024-11-18 18:44:55.512078] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:57.548 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:57.548 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:37:57.548 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:57.548 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:57.548 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:57.548 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:57.548 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:37:57.548 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.548 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:57.548 [2024-11-18 18:44:55.867752] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:57.548 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.548 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:37:57.548 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.548 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:37:57.807 Malloc0 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:57.807 Delay0 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:57.807 [2024-11-18 18:44:55.987999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.807 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:37:58.065 [2024-11-18 18:44:56.187777] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:59.963 Initializing NVMe Controllers 00:37:59.963 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:59.963 controller IO queue size 128 less than required 00:37:59.963 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:37:59.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:37:59.963 Initialization complete. Launching workers. 
00:37:59.963 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 23053 00:37:59.963 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 23110, failed to submit 66 00:37:59.963 success 23053, unsuccessful 57, failed 0 00:38:00.221 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:00.221 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.221 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:00.221 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:00.222 rmmod nvme_tcp 00:38:00.222 rmmod nvme_fabrics 00:38:00.222 rmmod nvme_keyring 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:00.222 18:44:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3143762 ']' 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3143762 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3143762 ']' 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3143762 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3143762 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3143762' 00:38:00.222 killing process with pid 3143762 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3143762 00:38:00.222 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3143762 00:38:01.602 18:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:01.602 18:44:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:01.602 18:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:01.602 18:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:38:01.602 18:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:38:01.602 18:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:01.602 18:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:38:01.602 18:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:01.602 18:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:01.602 18:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:01.602 18:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:01.602 18:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.556 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:03.556 00:38:03.556 real 0m9.303s 00:38:03.556 user 0m11.805s 00:38:03.556 sys 0m3.042s 00:38:03.556 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:03.556 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:03.556 ************************************ 00:38:03.556 END TEST nvmf_abort 00:38:03.556 ************************************ 00:38:03.556 18:45:01 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:03.556 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:03.556 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:03.556 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:03.556 ************************************ 00:38:03.556 START TEST nvmf_ns_hotplug_stress 00:38:03.556 ************************************ 00:38:03.556 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:03.556 * Looking for test storage... 
00:38:03.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:03.556 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:03.556 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:38:03.556 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:38:03.815 18:45:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:38:03.815 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:38:03.816 18:45:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:03.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.816 --rc genhtml_branch_coverage=1 00:38:03.816 --rc genhtml_function_coverage=1 00:38:03.816 --rc genhtml_legend=1 00:38:03.816 --rc geninfo_all_blocks=1 00:38:03.816 --rc geninfo_unexecuted_blocks=1 00:38:03.816 00:38:03.816 ' 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:03.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.816 --rc genhtml_branch_coverage=1 00:38:03.816 --rc genhtml_function_coverage=1 00:38:03.816 --rc genhtml_legend=1 00:38:03.816 --rc geninfo_all_blocks=1 00:38:03.816 --rc geninfo_unexecuted_blocks=1 00:38:03.816 00:38:03.816 ' 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:03.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.816 --rc genhtml_branch_coverage=1 00:38:03.816 --rc genhtml_function_coverage=1 00:38:03.816 --rc genhtml_legend=1 00:38:03.816 --rc geninfo_all_blocks=1 00:38:03.816 --rc geninfo_unexecuted_blocks=1 00:38:03.816 00:38:03.816 ' 00:38:03.816 18:45:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:03.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.816 --rc genhtml_branch_coverage=1 00:38:03.816 --rc genhtml_function_coverage=1 00:38:03.816 --rc genhtml_legend=1 00:38:03.816 --rc geninfo_all_blocks=1 00:38:03.816 --rc geninfo_unexecuted_blocks=1 00:38:03.816 00:38:03.816 ' 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:03.816 18:45:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.816 
18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:03.816 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:03.816 18:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:03.816 18:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:03.816 18:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.816 18:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:03.816 18:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:38:03.816 18:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:38:03.816 18:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:05.717 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:05.717 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:38:05.717 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:05.717 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:05.717 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:05.717 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:05.717 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:05.717 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:38:05.717 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:38:05.718 
18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:05.718 18:45:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:05.718 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:05.718 18:45:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:05.718 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:05.718 
18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:05.718 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:05.718 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:05.718 
18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:05.718 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:05.977 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:05.977 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:05.977 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:05.977 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:05.977 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:05.977 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:05.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:05.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:38:05.978 00:38:05.978 --- 10.0.0.2 ping statistics --- 00:38:05.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:05.978 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:05.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:05.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:38:05.978 00:38:05.978 --- 10.0.0.1 ping statistics --- 00:38:05.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:05.978 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:05.978 18:45:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3146419 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3146419 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3146419 ']' 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:05.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:05.978 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:05.978 [2024-11-18 18:45:04.230538] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:05.978 [2024-11-18 18:45:04.233052] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:38:05.978 [2024-11-18 18:45:04.233149] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:06.236 [2024-11-18 18:45:04.385397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:06.236 [2024-11-18 18:45:04.525454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:06.236 [2024-11-18 18:45:04.525538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:06.236 [2024-11-18 18:45:04.525567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:06.236 [2024-11-18 18:45:04.525591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:06.236 [2024-11-18 18:45:04.525627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:06.236 [2024-11-18 18:45:04.528359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:06.236 [2024-11-18 18:45:04.528446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:06.236 [2024-11-18 18:45:04.528471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:06.802 [2024-11-18 18:45:04.903194] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:06.802 [2024-11-18 18:45:04.904205] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:06.802 [2024-11-18 18:45:04.904988] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:06.802 [2024-11-18 18:45:04.905321] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:07.060 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:07.060 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:38:07.060 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:07.060 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:07.060 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:07.060 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:07.060 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:38:07.060 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:38:07.318 [2024-11-18 18:45:05.485642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:38:07.318 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:38:07.576 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:07.833 [2024-11-18 18:45:06.150144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:08.091 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:38:08.349 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:38:08.607 Malloc0
00:38:08.607 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:38:08.865 Delay0
00:38:08.865 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:09.122 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:38:09.380 NULL1
00:38:09.638 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:38:09.895 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3147226
00:38:09.896 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:38:09.896 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:09.896 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:10.153 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:10.411 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:38:10.411 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:38:10.669 true
00:38:10.669 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:10.669 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:10.927 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:11.184 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:38:11.184 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:38:11.441 true
00:38:11.441 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:11.441 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:12.006 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:12.006 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:38:12.006 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:38:12.264 true
00:38:12.264 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:12.264 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:12.829 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:13.086 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:38:13.086 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:38:13.344 true
00:38:13.344 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:13.344 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:13.602 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:13.860 18:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:38:13.860 18:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:38:14.117 true
00:38:14.117 18:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:14.117 18:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:14.375 18:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:14.633 18:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:38:14.633 18:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:38:14.891 true
00:38:14.891 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:14.891 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:15.149 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:15.407 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:38:15.407 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:38:15.664 true
00:38:15.665 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:15.665 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:16.230 18:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:16.488 18:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:38:16.488 18:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:38:16.745 true
00:38:16.745 18:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:16.746 18:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:17.003 18:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:17.261 18:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:38:17.261 18:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:38:17.518 true
00:38:17.518 18:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:17.518 18:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:17.776 18:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:18.033 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:38:18.034 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:38:18.291 true
00:38:18.291 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:18.291 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:18.549 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:18.807 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:38:18.807 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:38:19.065 true
00:38:19.065 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:19.065 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:19.630 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:19.630 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:38:19.630 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:38:19.888 true
00:38:20.145 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:20.145 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:20.403 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:20.660 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:38:20.660 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:38:20.917 true
00:38:20.917 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:20.917 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:21.174 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:21.432 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:38:21.432 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:38:21.690 true
00:38:21.690 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:21.690 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:21.948 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:22.207 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:38:22.207 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:38:22.465 true
00:38:22.723 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:22.723 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:22.980 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:23.238 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:38:23.238 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:38:23.495 true
00:38:23.495 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:23.495 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:23.753 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:24.011 18:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:38:24.011 18:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:38:24.269 true
00:38:24.269 18:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:24.269 18:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:24.527 18:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:24.783 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:38:24.783 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:38:25.041 true
00:38:25.041 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:25.041 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:25.607 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:25.607 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:38:25.607 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:38:25.865 true
00:38:26.122 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:26.122 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:26.380 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:26.638 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:38:26.638 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:38:26.895 true
00:38:26.895 18:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:26.895 18:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:27.153 18:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:27.410 18:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:38:27.410 18:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:38:27.668 true
00:38:27.668 18:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:27.668 18:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:27.926 18:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:28.184 18:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:38:28.184 18:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:38:28.750 true
00:38:28.750 18:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:28.750 18:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:28.750 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:29.316 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:38:29.316 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:38:29.316 true
00:38:29.316 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:29.316 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:29.880 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:29.880 18:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:38:29.880 18:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:38:30.138 true
00:38:30.395 18:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:30.396 18:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:30.654 18:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:30.912 18:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:38:30.912 18:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:38:31.169 true
00:38:31.169 18:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:31.169 18:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:31.427 18:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:31.685 18:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:38:31.685 18:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:38:31.943 true
00:38:31.943 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:31.943 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:32.201 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:32.459 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:38:32.459 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:38:32.717 true
00:38:32.975 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:32.975 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:33.234 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:33.491 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:38:33.492 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:38:33.750 true
00:38:33.750 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:33.750 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:34.007 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:34.265 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:38:34.265 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:38:34.523 true
00:38:34.523 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:34.523 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:34.780 18:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:35.038 18:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:38:35.038 18:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:38:35.296 true
00:38:35.296 18:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:35.296 18:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:35.862 18:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:35.862 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:38:35.862 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:38:36.119 true
00:38:36.476 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:36.476 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:36.476 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:36.754 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:38:36.754 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:38:37.011 true
00:38:37.011 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:37.011 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:37.268 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:37.525 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:38:37.525 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:38:37.783 true
00:38:38.040 18:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:38.040 18:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:38.298 18:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:38.555 18:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:38:38.555 18:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:38:38.813 true
00:38:38.813 18:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:38.813 18:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:39.071 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:39.329 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:38:39.329 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:38:39.586 true
00:38:39.586 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226
00:38:39.586 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:39.844 18:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:40.101 18:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036
00:38:40.101 18:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:38:40.101 Initializing NVMe Controllers
00:38:40.101 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:40.101 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:38:40.101 Controller IO queue size 128, less than required. 00:38:40.101 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:40.101 WARNING: Some requested NVMe devices were skipped 00:38:40.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:40.101 Initialization complete. Launching workers. 00:38:40.101 ======================================================== 00:38:40.102 Latency(us) 00:38:40.102 Device Information : IOPS MiB/s Average min max 00:38:40.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16505.87 8.06 7755.03 1990.53 15385.06 00:38:40.102 ======================================================== 00:38:40.102 Total : 16505.87 8.06 7755.03 1990.53 15385.06 00:38:40.102 00:38:40.359 true 00:38:40.359 18:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147226 00:38:40.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3147226) - No such process 00:38:40.359 18:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3147226 00:38:40.359 18:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:40.617 18:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:40.875 18:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:38:40.875 18:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@58 -- # pids=() 00:38:40.875 18:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:38:40.875 18:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:40.875 18:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:38:41.132 null0 00:38:41.132 18:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:41.132 18:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:41.132 18:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:38:41.697 null1 00:38:41.697 18:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:41.697 18:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:41.697 18:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:38:41.697 null2 00:38:41.697 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:41.697 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:41.697 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:38:41.955 null3 00:38:42.214 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:42.214 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:42.214 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:38:42.472 null4 00:38:42.472 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:42.472 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:42.472 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:38:42.731 null5 00:38:42.731 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:42.731 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:42.731 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:38:42.988 null6 00:38:42.988 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:42.989 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:38:42.989 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:38:43.247 null7 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
1 nqn.2016-06.io.spdk:cnode1 null0 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:43.247 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:43.248 18:45:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3151432 3151433 3151435 3151436 3151439 3151441 3151443 3151445 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.248 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:43.506 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:43.506 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:43.506 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:38:43.506 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:43.506 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:43.506 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:43.506 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:43.506 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:43.764 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.764 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.764 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:43.764 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.764 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.764 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:43.764 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.764 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.764 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:43.764 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.764 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.765 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:43.765 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.765 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.765 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.765 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 
null2 00:38:43.765 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.765 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:43.765 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.765 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.765 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:43.765 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.765 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.765 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:44.023 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:44.023 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:44.023 18:45:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:44.023 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:44.023 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:44.023 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:44.023 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:44.023 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:44.280 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.280 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.280 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:44.280 18:45:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.280 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.280 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:44.280 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.280 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.281 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:44.538 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.538 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.538 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:44.538 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.538 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.538 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.538 18:45:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.538 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:44.538 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:44.538 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.538 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.538 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:44.538 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.538 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.538 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:44.796 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:44.796 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:44.796 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:44.796 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:44.796 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:44.796 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:44.796 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:44.796 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.054 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:45.312 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:45.312 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:45.312 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:45.312 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:45.312 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:45.312 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:45.312 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:45.312 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.570 18:45:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:45.570 18:45:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.570 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:45.828 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:45.828 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:45.828 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:45.828 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:45.828 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:45.828 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:45.828 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:45.828 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:46.086 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.086 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.086 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:46.086 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.086 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.086 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:46.086 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.086 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.086 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:46.086 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.086 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.086 18:45:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:46.086 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.086 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.087 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:46.087 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.087 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.087 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:46.087 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.087 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.087 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:46.087 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.087 18:45:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.087 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:46.652 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:46.652 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:46.652 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:46.652 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:46.652 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:46.652 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:46.652 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:46.652 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:46.652 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.652 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.652 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:46.909 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.909 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.909 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:46.909 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.909 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.909 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:46.909 18:45:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.909 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.909 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:46.909 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.909 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.910 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:46.910 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.910 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.910 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:46.910 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.910 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.910 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:46.910 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.910 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.910 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:47.167 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:47.167 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:47.167 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:47.167 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:47.167 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:47.167 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:47.167 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:47.167 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:47.424 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.424 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.424 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:47.424 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.424 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.424 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:47.424 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.424 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.424 18:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:47.424 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.424 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.425 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:47.425 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.425 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.425 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:47.425 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.425 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.425 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:47.425 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.425 18:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.425 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:47.425 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.425 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.425 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:47.682 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:47.682 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:47.682 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:47.682 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:47.682 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:47.682 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:47.682 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:47.682 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 
null4 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.941 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:48.199 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:48.199 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:48.199 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:48.199 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:38:48.199 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:48.199 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:48.199 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:48.199 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:48.457 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.457 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.457 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:48.457 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.457 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.457 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:48.457 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.457 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.457 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:48.457 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.457 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.457 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:48.715 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.715 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.715 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:48.715 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.715 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.715 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:48.715 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.715 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.715 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.715 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:48.715 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.715 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:48.973 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:48.973 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:48.973 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:48.973 18:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:48.973 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:48.973 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:48.973 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:48.973 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:38:49.232 18:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:49.232 rmmod nvme_tcp 00:38:49.232 rmmod nvme_fabrics 00:38:49.232 rmmod nvme_keyring 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3146419 ']' 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3146419 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3146419 ']' 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3146419 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3146419 00:38:49.232 18:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3146419'
killing process with pid 3146419
00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3146419
00:38:49.232 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3146419
00:38:50.606 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:38:50.606 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:38:50.606 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:38:50.606 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:38:50.606 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:38:50.606 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:38:50.606 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:38:50.606 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:38:50.606 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:38:50.606 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:50.606 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:38:50.606 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:52.506 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:38:52.506
00:38:52.506 real 0m48.977s
00:38:52.506 user 3m19.101s
00:38:52.506 sys 0m23.030s
00:38:52.506 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:52.506 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:38:52.506 ************************************
00:38:52.506 END TEST nvmf_ns_hotplug_stress
00:38:52.506 ************************************
00:38:52.506 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:38:52.506 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:38:52.506 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:52.506 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:38:52.764 ************************************
00:38:52.764 START TEST nvmf_delete_subsystem
00:38:52.764 ************************************
00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
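The ns_hotplug_stress trace above repeatedly attaches and detaches namespaces on the same subsystem via rpc.py. A minimal sketch of that add/remove pattern, with the rpc.py call replaced by a stub so it runs without a live SPDK target (the `rpc` function and the sequential ordering are assumptions for illustration; the loop bound and the nsid-to-null-bdev pairing are taken from the log):

```shell
#!/usr/bin/env bash
# Sketch of the namespace hotplug stress pattern seen in the trace above.
# rpc() is a stub standing in for SPDK's scripts/rpc.py; the real test
# drives a running nvmf target over its RPC socket.
rpc_calls=0
rpc() { echo "rpc.py $*"; (( ++rpc_calls )); }

NQN=nqn.2016-06.io.spdk:cnode1

i=0
while (( i < 10 )); do                      # mirrors ns_hotplug_stress.sh@16
    for n in {1..8}; do                     # attach null0..null7 as nsid 1..8
        rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
    done
    for n in {1..8}; do                     # then detach every namespace again
        rpc nvmf_subsystem_remove_ns "$NQN" "$n"
    done
    (( ++i ))
done
```

In the real run the rpc.py invocations are launched concurrently, which is why the add and remove entries interleave out of order in the log above; this sketch keeps them sequential for clarity.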
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:52.764 * Looking for test storage... 00:38:52.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:38:52.764 
18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:38:52.764 18:45:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:52.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.764 --rc genhtml_branch_coverage=1 00:38:52.764 --rc genhtml_function_coverage=1 00:38:52.764 --rc genhtml_legend=1 00:38:52.764 --rc geninfo_all_blocks=1 00:38:52.764 --rc geninfo_unexecuted_blocks=1 00:38:52.764 00:38:52.764 ' 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:52.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.764 --rc genhtml_branch_coverage=1 00:38:52.764 --rc genhtml_function_coverage=1 00:38:52.764 --rc genhtml_legend=1 00:38:52.764 --rc geninfo_all_blocks=1 00:38:52.764 --rc geninfo_unexecuted_blocks=1 00:38:52.764 00:38:52.764 ' 00:38:52.764 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:52.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.764 --rc genhtml_branch_coverage=1 00:38:52.764 --rc genhtml_function_coverage=1 00:38:52.764 --rc genhtml_legend=1 00:38:52.764 --rc geninfo_all_blocks=1 00:38:52.764 --rc 
geninfo_unexecuted_blocks=1 00:38:52.765 00:38:52.765 ' 00:38:52.765 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:52.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.765 --rc genhtml_branch_coverage=1 00:38:52.765 --rc genhtml_function_coverage=1 00:38:52.765 --rc genhtml_legend=1 00:38:52.765 --rc geninfo_all_blocks=1 00:38:52.765 --rc geninfo_unexecuted_blocks=1 00:38:52.765 00:38:52.765 ' 00:38:52.765 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:52.765 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:38:52.765 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:52.765 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:52.765 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:52.765 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:52.765 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:52.765 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:52.765 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:52.765 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:52.765 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
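The cmp_versions trace above (invoked as `lt 1.15 2` to gate on the lcov version) splits each version string on `.`, `-`, and `:` and compares numeric components left to right. A self-contained sketch of that comparison; the function names mirror scripts/common.sh, but this is a simplified reimplementation, not the script itself:

```shell
#!/usr/bin/env bash
# Simplified sketch of the cmp_versions logic traced above: split both
# versions on . - : and compare components numerically, left to right.
# Missing components are treated as 0 (so 1 == 1.0).
cmp_versions() {
    local ver1 ver2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && { [[ $2 == '>' ]]; return; }
        (( a < b )) && { [[ $2 == '<' ]]; return; }
    done
    return 1   # versions equal: neither strictly < nor >
}
lt() { cmp_versions "$1" '<' "$2"; }   # as in the trace: lt 1.15 2

lt 1.15 2 && echo "1.15 < 2"
```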
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:52.765 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.765 
18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
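The build_nvmf_app_args trace above grows the NVMF_APP argument array conditionally, so flags such as --interrupt-mode are only passed to the target when the corresponding test flag is set. A sketch of that pattern (the app name `nvmf_tgt`, shm id 0, and the `interrupt_mode` variable are illustrative assumptions, not the script's actual values):

```shell
#!/usr/bin/env bash
# Sketch of the conditional argv-array construction traced above.
NVMF_APP_SHM_ID=0
interrupt_mode=1    # assumption: mirrors the "'[' 1 -eq 1 ']'" branch in the log

NVMF_APP=(nvmf_tgt)                          # assumed app name for this sketch
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # cf. nvmf/common.sh@29
if [ "$interrupt_mode" -eq 1 ]; then         # cf. nvmf/common.sh@33
    NVMF_APP+=(--interrupt-mode)             # cf. nvmf/common.sh@34
fi

echo "${NVMF_APP[@]}"   # → nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode
```

Building the command as an array (rather than a string) keeps each flag a separate argv entry, so values with spaces survive word splitting when the app is finally exec'd as "${NVMF_APP[@]}".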
nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:52.765 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:38:52.765 18:45:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:54.664 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:54.664 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:54.923 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:38:54.923 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:54.923 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:54.923 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:54.923 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:54.923 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:54.923 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:54.923 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:54.923 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:54.923 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:54.923 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:54.923 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:54.923 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:54.923 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:54.923 18:45:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:54.923 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:54.923 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:54.923 18:45:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:38:54.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:54.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:38:54.923 00:38:54.923 --- 10.0.0.2 ping statistics --- 00:38:54.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:54.923 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:54.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:54.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:38:54.923 00:38:54.923 --- 10.0.0.1 ping statistics --- 00:38:54.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:54.923 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
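The network bring-up traced in the `nvmf_tcp_init` lines above reduces to the following command sequence (a sketch assembled from this run's trace; the interface names `cvl_0_0`/`cvl_0_1` and the namespace name `cvl_0_0_ns_spdk` are specific to this host and will differ elsewhere):

```shell
# Flush any stale addresses on both ports of the test NIC.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Move the target-side port into its own network namespace so that
# target and initiator traffic actually traverse the physical link.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator keeps 10.0.0.1 in the root namespace; the target gets
# 10.0.0.2 inside the namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring both ports (and loopback in the namespace) up.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port on the initiator interface, then verify
# reachability in both directions before starting the target.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

This mirrors the `nvmf/common.sh` trace line by line; it requires root and a two-port NIC, so it is illustrative rather than portable.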
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3154315 00:38:54.923 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:54.924 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3154315 00:38:54.924 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3154315 ']' 00:38:54.924 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:54.924 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:54.924 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:54.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:54.924 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:54.924 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:54.924 [2024-11-18 18:45:53.240040] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:54.924 [2024-11-18 18:45:53.242540] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:38:54.924 [2024-11-18 18:45:53.242672] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:55.182 [2024-11-18 18:45:53.383832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:55.182 [2024-11-18 18:45:53.508720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:55.182 [2024-11-18 18:45:53.508803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:55.182 [2024-11-18 18:45:53.508828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:55.182 [2024-11-18 18:45:53.508847] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:55.182 [2024-11-18 18:45:53.508874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:55.182 [2024-11-18 18:45:53.511293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:55.182 [2024-11-18 18:45:53.511301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:55.746 [2024-11-18 18:45:53.839814] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:55.746 [2024-11-18 18:45:53.840488] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:55.746 [2024-11-18 18:45:53.840845] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:56.004 [2024-11-18 18:45:54.220335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:56.004 [2024-11-18 18:45:54.240684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:56.004 NULL1 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 
1000000 -t 1000000 -w 1000000 -n 1000000 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:56.004 Delay0 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3154464 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:38:56.004 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:56.265 [2024-11-18 18:45:54.373869] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
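The target configuration traced by the `rpc_cmd` lines above boils down to the RPC sequence below (a sketch; the `rpc_cmd` helper in this suite wraps SPDK's standard `scripts/rpc.py` client, and the paths, queue depths, and 1-second delay-bdev latencies are copied from this run):

```shell
# Start the target in interrupt mode inside the target namespace,
# pinned to cores 0-1 (flags as used in this run).
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &

# Create the TCP transport, a subsystem, and a listener on the
# target-side address configured earlier.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Back the namespace with a null bdev wrapped in a delay bdev
# (1,000,000 us on every latency axis), so that plenty of I/O is
# still in flight when the subsystem is deleted mid-run.
scripts/rpc.py bdev_null_create NULL1 1000 512
scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Drive mixed random I/O from the initiator side, then delete the
# subsystem while the run is in progress; the outstanding commands
# complete with errors (sct=0, sc=8), which is what the flood of
# "completed with error" lines in the subsequent trace records.
build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```

The point of the delay bdev is precisely to guarantee queued I/O at deletion time; without it the perf run could drain before `nvmf_delete_subsystem` lands and the error path under test would never be exercised.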
00:38:58.237 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:58.237 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.237 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Write completed with error (sct=0, sc=8) 00:38:58.237 Write completed with error (sct=0, sc=8) 00:38:58.237 Write completed with error (sct=0, sc=8) 00:38:58.237 starting I/O failed: -6 00:38:58.237 Write completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Write completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 starting I/O failed: -6 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 starting I/O failed: -6 00:38:58.237 Write completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Write completed with error (sct=0, sc=8) 00:38:58.237 Write completed with error (sct=0, sc=8) 00:38:58.237 starting I/O failed: -6 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 starting I/O failed: -6 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Write completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 starting I/O failed: -6 00:38:58.237 Write completed with error (sct=0, 
sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Write completed with error (sct=0, sc=8) 00:38:58.237 Write completed with error (sct=0, sc=8) 00:38:58.237 starting I/O failed: -6 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 starting I/O failed: -6 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Write completed with error (sct=0, sc=8) 00:38:58.237 Write completed with error (sct=0, sc=8) 00:38:58.237 Write completed with error (sct=0, sc=8) 00:38:58.237 starting I/O failed: -6 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 starting I/O failed: -6 00:38:58.237 Write completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Write completed with error (sct=0, sc=8) 00:38:58.237 starting I/O failed: -6 00:38:58.237 [2024-11-18 18:45:56.515319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(6) to be set 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 starting I/O failed: -6 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Write completed with error (sct=0, sc=8) 00:38:58.237 Write completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 Write completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 starting I/O failed: -6 00:38:58.237 Write completed with error (sct=0, sc=8) 00:38:58.237 Read completed with error (sct=0, sc=8) 00:38:58.237 
00:38:58.237 Read completed with error (sct=0, sc=8)
00:38:58.237 Write completed with error (sct=0, sc=8)
[identical read/write completion-error records at 00:38:58.237-00:38:58.238 repeated, interleaved with "starting I/O failed: -6"]
00:38:58.238 [2024-11-18 18:45:56.516695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set
[identical read/write completion-error records at 00:38:58.238 repeated]
00:38:59.169 [2024-11-18 18:45:57.480496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set
[identical read/write completion-error records at 00:38:59.427 repeated]
00:38:59.427 [2024-11-18 18:45:57.518698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set
[identical read/write completion-error records at 00:38:59.427-00:38:59.428 repeated]
00:38:59.428 [2024-11-18 18:45:57.520325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set
[identical read/write completion-error records at 00:38:59.428 repeated]
00:38:59.428 [2024-11-18 18:45:57.520987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set
[identical read/write completion-error records at 00:38:59.428 repeated]
00:38:59.428 [2024-11-18 18:45:57.522660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set
00:38:59.428 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:59.428 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:38:59.428 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3154464
00:38:59.428 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:38:59.428 Initializing NVMe Controllers
00:38:59.428 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:59.428 Controller IO queue size 128, less than required.
00:38:59.428 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:38:59.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:38:59.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:38:59.428 Initialization complete.
Launching workers.
00:38:59.428 ========================================================
00:38:59.428 Latency(us)
00:38:59.428 Device Information : IOPS MiB/s Average min max
00:38:59.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.84 0.08 906934.29 1062.01 1018405.98
00:38:59.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.23 0.09 884568.28 779.27 1019579.56
00:38:59.428 ========================================================
00:38:59.428 Total : 342.07 0.17 895411.43 779.27 1019579.56
00:38:59.428
00:38:59.428 [2024-11-18 18:45:57.527624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 (9): Bad file descriptor
00:38:59.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:38:59.994 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:38:59.994 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3154464
00:38:59.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3154464) - No such process
00:38:59.994 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3154464
00:38:59.994 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:38:59.994 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3154464
00:38:59.994 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:38:59.994 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:59.994
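As a sanity check on the latency summary above (not part of the test output): the Total row's average is consistent with the IOPS-weighted mean of the two per-core rows. The numbers below are copied from the table; awk is used only because the shell lacks floating-point arithmetic.

```shell
#!/bin/sh
# Verify the Total row of the spdk_nvme_perf summary above.
# Values are copied verbatim from the per-core rows of the table.
awk 'BEGIN {
    iops_c2 = 165.84; avg_c2 = 906934.29;   # "from core 2" row
    iops_c3 = 176.23; avg_c3 = 884568.28;   # "from core 3" row
    total_iops = iops_c2 + iops_c3
    # The reported Total average is the IOPS-weighted mean of the per-core averages.
    wavg = (iops_c2 * avg_c2 + iops_c3 * avg_c3) / total_iops
    printf "Total IOPS:   %.2f\n", total_iops   # 342.07, as reported
    printf "Weighted avg: %.2f us\n", wavg      # ~895411, matching the Total row
}'
```

The small residual difference from the reported 895411.43 us comes from the table rounding IOPS to two decimals.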
18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:38:59.994 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:59.994 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3154464 00:38:59.994 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:38:59.994 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:59.995 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:59.995 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:59.995 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:59.995 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.995 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:59.995 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.995 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:59.995 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.995 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:38:59.995 [2024-11-18 18:45:58.044651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:59.995 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.995 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:59.995 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.995 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:59.995 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.995 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3154876 00:38:59.995 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:59.995 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:38:59.995 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3154876 00:38:59.995 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:59.995 [2024-11-18 18:45:58.153279] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:39:00.253 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:00.253 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3154876 00:39:00.253 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:00.818 18:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:00.818 18:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3154876 00:39:00.818 18:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:01.390 18:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:01.390 18:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3154876 00:39:01.390 18:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:01.955 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:01.955 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3154876 00:39:01.955 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:02.521 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:02.521 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 
3154876 00:39:02.521 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:02.778 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:02.778 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3154876 00:39:02.778 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:03.036 Initializing NVMe Controllers 00:39:03.036 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:03.036 Controller IO queue size 128, less than required. 00:39:03.036 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:03.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:03.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:03.036 Initialization complete. Launching workers. 
00:39:03.036 ========================================================
00:39:03.036 Latency(us)
00:39:03.036 Device Information : IOPS MiB/s Average min max
00:39:03.036 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1006779.09 1000340.82 1043826.95
00:39:03.036 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006024.68 1000291.00 1043260.98
00:39:03.036 ========================================================
00:39:03.036 Total : 256.00 0.12 1006401.89 1000291.00 1043826.95
00:39:03.036
00:39:03.294 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:39:03.294 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3154876
00:39:03.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3154876) - No such process
00:39:03.294 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3154876
00:39:03.294 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:39:03.294 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:39:03.294 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:39:03.294 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:39:03.294 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:39:03.294 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:39:03.294 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:39:03.294 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:03.294 rmmod nvme_tcp 00:39:03.294 rmmod nvme_fabrics 00:39:03.294 rmmod nvme_keyring 00:39:03.553 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:03.553 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:39:03.553 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:39:03.553 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3154315 ']' 00:39:03.553 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3154315 00:39:03.553 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3154315 ']' 00:39:03.553 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3154315 00:39:03.553 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:39:03.553 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:03.553 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3154315 00:39:03.553 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:03.553 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:03.553 18:46:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3154315' 00:39:03.553 killing process with pid 3154315 00:39:03.553 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3154315 00:39:03.553 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3154315 00:39:04.928 18:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:04.928 18:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:04.928 18:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:04.928 18:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:39:04.928 18:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:39:04.928 18:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:04.928 18:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:39:04.928 18:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:04.928 18:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:04.928 18:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:04.928 18:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:04.928 18:46:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:06.830 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:06.830 00:39:06.830 real 0m14.019s 00:39:06.830 user 0m26.175s 00:39:06.830 sys 0m3.978s 00:39:06.830 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:06.830 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:06.830 ************************************ 00:39:06.830 END TEST nvmf_delete_subsystem 00:39:06.830 ************************************ 00:39:06.830 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:06.830 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:06.830 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:06.830 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:06.830 ************************************ 00:39:06.830 START TEST nvmf_host_management 00:39:06.830 ************************************ 00:39:06.830 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:06.830 * Looking for test storage... 
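The delay loop traced earlier in delete_subsystem.sh (`kill -0 PID`, `sleep 0.5`, `(( delay++ > 20 ))`) is a standard shell pattern for waiting until a process exits. A simplified sketch of that pattern, assuming a hypothetical helper name `wait_for_exit` (not part of SPDK):

```shell
#!/bin/sh
# Sketch of the poll-until-exit pattern traced in delete_subsystem.sh.
# wait_for_exit PID [MAX_TRIES] -> 0 once the PID is gone, 1 on timeout.
wait_for_exit() {
    pid=$1
    max_tries=${2:-20}
    delay=0
    # kill -0 delivers no signal; it only tests whether the process still exists.
    while kill -0 "$pid" 2>/dev/null; do
        delay=$((delay + 1))
        if [ "$delay" -gt "$max_tries" ]; then
            return 1   # gave up: process still alive after max_tries polls
        fi
        sleep 0.5
    done
    return 0           # process has exited
}
```

The test script pairs this with `wait PID`, which additionally reaps the child and collects its exit status; `kill -0` alone only answers whether the PID still exists, which is why the log also shows the `kill: (3154464) - No such process` message once the target is gone.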
00:39:06.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:06.830 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:06.830 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:39:06.830 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:39:06.830 18:46:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:39:06.830 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:06.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.831 --rc genhtml_branch_coverage=1 00:39:06.831 --rc genhtml_function_coverage=1 00:39:06.831 --rc genhtml_legend=1 00:39:06.831 --rc geninfo_all_blocks=1 00:39:06.831 --rc geninfo_unexecuted_blocks=1 00:39:06.831 00:39:06.831 ' 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:06.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.831 --rc genhtml_branch_coverage=1 00:39:06.831 --rc genhtml_function_coverage=1 00:39:06.831 --rc genhtml_legend=1 00:39:06.831 --rc geninfo_all_blocks=1 00:39:06.831 --rc geninfo_unexecuted_blocks=1 00:39:06.831 00:39:06.831 ' 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:06.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.831 --rc genhtml_branch_coverage=1 00:39:06.831 --rc genhtml_function_coverage=1 00:39:06.831 --rc genhtml_legend=1 00:39:06.831 --rc geninfo_all_blocks=1 00:39:06.831 --rc geninfo_unexecuted_blocks=1 00:39:06.831 00:39:06.831 ' 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:06.831 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.831 --rc genhtml_branch_coverage=1 00:39:06.831 --rc genhtml_function_coverage=1 00:39:06.831 --rc genhtml_legend=1 00:39:06.831 --rc geninfo_all_blocks=1 00:39:06.831 --rc geninfo_unexecuted_blocks=1 00:39:06.831 00:39:06.831 ' 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:06.831 18:46:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.831 
18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:39:06.831 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:39:09.361 
18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:09.361 18:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:09.361 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:09.361 18:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:09.361 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:09.361 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:09.362 18:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:09.362 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:09.362 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:09.362 18:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:09.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:09.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:39:09.362 00:39:09.362 --- 10.0.0.2 ping statistics --- 00:39:09.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:09.362 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:09.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:09.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:39:09.362 00:39:09.362 --- 10.0.0.1 ping statistics --- 00:39:09.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:09.362 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
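The namespace plumbing traced above (flush both interfaces, create the `cvl_0_0_ns_spdk` namespace, move the target port into it, address both ends on 10.0.0.0/24, open TCP port 4420, then cross-ping) can be sketched as a dry-run script. The interface names, namespace name, addresses, and port come straight from the log; the `run()` wrapper that only echoes each command is an assumption added here so the sketch can execute unprivileged — swap it for `sudo "$@"` to apply the topology for real.

```shell
#!/bin/sh
# Dry-run sketch of the NVMe-oF TCP test topology set up in the trace above.
# run() only prints each command; it does NOT touch the network.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk        # target-side network namespace (from the log)
TGT_IF=cvl_0_0            # target interface, moved into the namespace
INI_IF=cvl_0_1            # initiator interface, stays in the root namespace

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port toward the initiator, then verify reachability:
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Splitting target and initiator into separate namespaces is what lets a single host exercise a real TCP path (the two pings in the log, 10.0.0.2 from the root namespace and 10.0.0.1 from inside `cvl_0_0_ns_spdk`, confirm both directions before the target starts).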
00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3157343 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3157343 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3157343 ']' 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:09.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:09.362 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:09.362 [2024-11-18 18:46:07.399474] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:09.362 [2024-11-18 18:46:07.402263] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:39:09.362 [2024-11-18 18:46:07.402372] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:09.362 [2024-11-18 18:46:07.552432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:09.362 [2024-11-18 18:46:07.680704] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:09.362 [2024-11-18 18:46:07.680773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:09.363 [2024-11-18 18:46:07.680797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:09.363 [2024-11-18 18:46:07.680815] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:09.363 [2024-11-18 18:46:07.680834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
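The `waitforlisten 3157343` step above ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") blocks until the target's RPC socket exists. A minimal stand-in for that polling loop is sketched below; the default path and retry count mirror the trace (`/var/tmp/spdk.sock`, `max_retries=100`), but this is an illustration, not SPDK's actual `waitforlisten` implementation, which additionally checks that the target pid is still alive between polls.

```shell
# Poll until a UNIX-domain socket appears, up to max_retries attempts.
# wait_for_socket [SOCK_PATH] [MAX_RETRIES]  ->  0 once the socket exists,
# 1 if it never shows up (the caller would then fail the test early).
wait_for_socket() {
    sock="${1:-/var/tmp/spdk.sock}"
    max_retries="${2:-100}"
    i=0
    while [ ! -S "$sock" ]; do
        i=$((i + 1))
        if [ "$i" -gt "$max_retries" ]; then
            return 1
        fi
        sleep 0.1
    done
    return 0
}
```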
00:39:09.363 [2024-11-18 18:46:07.683367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:09.363 [2024-11-18 18:46:07.683418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:09.363 [2024-11-18 18:46:07.686642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:09.363 [2024-11-18 18:46:07.686643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:09.929 [2024-11-18 18:46:08.015113] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:09.929 [2024-11-18 18:46:08.028931] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:09.929 [2024-11-18 18:46:08.029180] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:09.929 [2024-11-18 18:46:08.029889] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:09.929 [2024-11-18 18:46:08.030212] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:39:10.187 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:10.187 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:39:10.187 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:10.187 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:10.187 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:10.187 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:10.187 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:10.188 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.188 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:10.188 [2024-11-18 18:46:08.387744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:10.188 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.188 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:39:10.188 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:10.188 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:10.188 18:46:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:10.188 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:39:10.188 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:39:10.188 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.188 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:10.188 Malloc0 00:39:10.188 [2024-11-18 18:46:08.515880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3157579 00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3157579 /var/tmp/bdevperf.sock 00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3157579 ']' 00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 
00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:10.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:10.446 { 00:39:10.446 "params": { 00:39:10.446 "name": "Nvme$subsystem", 00:39:10.446 "trtype": "$TEST_TRANSPORT", 00:39:10.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:10.446 "adrfam": "ipv4", 00:39:10.446 "trsvcid": "$NVMF_PORT", 00:39:10.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:10.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:10.446 "hdgst": ${hdgst:-false}, 00:39:10.446 "ddgst": ${ddgst:-false} 00:39:10.446 }, 00:39:10.446 "method": "bdev_nvme_attach_controller" 00:39:10.446 } 00:39:10.446 EOF 00:39:10.446 )") 00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:39:10.446 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:10.446 "params": { 00:39:10.446 "name": "Nvme0", 00:39:10.446 "trtype": "tcp", 00:39:10.446 "traddr": "10.0.0.2", 00:39:10.446 "adrfam": "ipv4", 00:39:10.446 "trsvcid": "4420", 00:39:10.446 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:10.446 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:10.446 "hdgst": false, 00:39:10.446 "ddgst": false 00:39:10.446 }, 00:39:10.446 "method": "bdev_nvme_attach_controller" 00:39:10.446 }' 00:39:10.446 [2024-11-18 18:46:08.636931] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:39:10.446 [2024-11-18 18:46:08.637067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3157579 ] 00:39:10.446 [2024-11-18 18:46:08.773997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:10.705 [2024-11-18 18:46:08.902003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:11.270 Running I/O for 10 seconds... 
00:39:11.270 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:11.270 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:39:11.270 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:39:11.270 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.270 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:11.271 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.271 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:11.271 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:39:11.271 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:39:11.271 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:39:11.271 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:39:11.271 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:39:11.271 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:39:11.271 18:46:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:11.271 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:11.271 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:11.271 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.271 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:11.531 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.532 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=285 00:39:11.532 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 285 -ge 100 ']' 00:39:11.532 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:39:11.532 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:39:11.532 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:39:11.532 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:11.532 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.532 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:11.532 
[2024-11-18 18:46:09.646493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:11.532 [2024-11-18 18:46:09.646579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.532 [2024-11-18 18:46:09.646628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:11.532 [2024-11-18 18:46:09.646651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.532 [2024-11-18 18:46:09.646673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:11.532 [2024-11-18 18:46:09.646695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.532 [2024-11-18 18:46:09.646717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:11.532 [2024-11-18 18:46:09.646737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.532 [2024-11-18 18:46:09.646757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:39:11.532 [2024-11-18 18:46:09.649595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:11.532 [2024-11-18 18:46:09.650965] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.532 [2024-11-18 18:46:09.651002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.532 [2024-11-18 18:46:09.651046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.532 [2024-11-18 18:46:09.651070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.651104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.651143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.651170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.651193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.651218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.651250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.651275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.651297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.651329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.651351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.651375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.651397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.651422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.651443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.651468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.651490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.651515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.651536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.651562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:39:11.533 [2024-11-18 18:46:09.651585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.651625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.651650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.651675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.651697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.651721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.651749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.651775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.651798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.651822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.651844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.651868] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.651891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.651926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.651948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.651982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.652004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.652028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.533 [2024-11-18 18:46:09.652059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.652084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.652106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.652130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43776 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.652153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.652178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.652200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:11.533 [2024-11-18 18:46:09.652224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.652246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.652270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.652293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.652324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44288 len:12 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.533 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.652351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 
18:46:09.652376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.652398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.652423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:11.533 [2024-11-18 18:46:09.652445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.652469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.652491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.652516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.652538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.652563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.652585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.652625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 
lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.652649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.652674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.652696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.652721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.652743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.652767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.652789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.652814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.652836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.652861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.652883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:39:11.533 [2024-11-18 18:46:09.652923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.533 [2024-11-18 18:46:09.652946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.533 [2024-11-18 18:46:09.652971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.652993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.653019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.653041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.653066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.653089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.653114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.653136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.653161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.653184] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.653219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.653241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.653266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.653287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.653312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.653334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.653358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.653380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.653405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.653436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.653461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:49 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.653483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.653507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.653534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.653560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.653582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.653624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.653649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.653673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.653695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.653720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.653743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.653767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.653790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.653814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.653837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.653861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.653883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.653909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.653940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.653965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.653987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.654012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 
18:46:09.654034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.654058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.654080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.654104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.654127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.654160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.534 [2024-11-18 18:46:09.654183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.534 [2024-11-18 18:46:09.654205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:39:11.534 [2024-11-18 18:46:09.655863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:39:11.534 task offset: 40960 on job bdev=Nvme0n1 fails 00:39:11.534 00:39:11.534 Latency(us) 00:39:11.534 [2024-11-18T17:46:09.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:11.534 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:11.534 Job: Nvme0n1 ended in about 0.28 seconds with error 00:39:11.534 Verification LBA range: start 0x0 length 0x400 00:39:11.534 Nvme0n1 : 0.28 1135.38 70.96 227.08 0.00 
45104.99 8058.50 41166.32 00:39:11.534 [2024-11-18T17:46:09.871Z] =================================================================================================================== 00:39:11.534 [2024-11-18T17:46:09.871Z] Total : 1135.38 70.96 227.08 0.00 45104.99 8058.50 41166.32 00:39:11.534 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.534 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:39:11.534 [2024-11-18 18:46:09.661258] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:11.534 [2024-11-18 18:46:09.661323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:39:11.534 [2024-11-18 18:46:09.793855] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:39:12.468 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3157579 00:39:12.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3157579) - No such process 00:39:12.468 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:39:12.468 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:39:12.468 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:39:12.468 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # 
gen_nvmf_target_json 0 00:39:12.468 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:39:12.468 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:39:12.468 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:12.468 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:12.468 { 00:39:12.468 "params": { 00:39:12.468 "name": "Nvme$subsystem", 00:39:12.468 "trtype": "$TEST_TRANSPORT", 00:39:12.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:12.468 "adrfam": "ipv4", 00:39:12.468 "trsvcid": "$NVMF_PORT", 00:39:12.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:12.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:12.468 "hdgst": ${hdgst:-false}, 00:39:12.468 "ddgst": ${ddgst:-false} 00:39:12.468 }, 00:39:12.468 "method": "bdev_nvme_attach_controller" 00:39:12.468 } 00:39:12.468 EOF 00:39:12.468 )") 00:39:12.468 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:39:12.468 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:39:12.468 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:39:12.468 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:12.468 "params": { 00:39:12.468 "name": "Nvme0", 00:39:12.468 "trtype": "tcp", 00:39:12.468 "traddr": "10.0.0.2", 00:39:12.468 "adrfam": "ipv4", 00:39:12.468 "trsvcid": "4420", 00:39:12.468 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:12.468 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:12.468 "hdgst": false, 00:39:12.468 "ddgst": false 00:39:12.468 }, 00:39:12.468 "method": "bdev_nvme_attach_controller" 00:39:12.468 }' 00:39:12.468 [2024-11-18 18:46:10.751240] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:39:12.468 [2024-11-18 18:46:10.751394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3157789 ] 00:39:12.727 [2024-11-18 18:46:10.899988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.727 [2024-11-18 18:46:11.030300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.293 Running I/O for 1 seconds... 
00:39:14.227 1344.00 IOPS, 84.00 MiB/s 00:39:14.227 Latency(us) 00:39:14.227 [2024-11-18T17:46:12.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:14.227 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:14.227 Verification LBA range: start 0x0 length 0x400 00:39:14.227 Nvme0n1 : 1.02 1384.81 86.55 0.00 0.00 45426.79 7670.14 40777.96 00:39:14.227 [2024-11-18T17:46:12.564Z] =================================================================================================================== 00:39:14.227 [2024-11-18T17:46:12.564Z] Total : 1384.81 86.55 0.00 0.00 45426.79 7670.14 40777.96 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:39:15.160 18:46:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:15.160 rmmod nvme_tcp 00:39:15.160 rmmod nvme_fabrics 00:39:15.160 rmmod nvme_keyring 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3157343 ']' 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3157343 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3157343 ']' 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3157343 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3157343 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:15.160 18:46:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3157343' 00:39:15.160 killing process with pid 3157343 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3157343 00:39:15.160 18:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3157343 00:39:16.533 [2024-11-18 18:46:14.718790] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:39:16.533 18:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:16.533 18:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:16.533 18:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:16.533 18:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:39:16.533 18:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:39:16.533 18:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:16.533 18:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:39:16.533 18:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:16.533 18:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:16.533 18:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:16.533 18:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:16.533 18:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:19.059 18:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:19.059 18:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:39:19.059 00:39:19.059 real 0m11.919s 00:39:19.059 user 0m25.744s 00:39:19.059 sys 0m4.628s 00:39:19.059 18:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:19.059 18:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:19.059 ************************************ 00:39:19.059 END TEST nvmf_host_management 00:39:19.059 ************************************ 00:39:19.059 18:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:19.059 18:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:19.059 18:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:19.059 18:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:19.059 ************************************ 00:39:19.059 START TEST nvmf_lvol 00:39:19.059 ************************************ 00:39:19.059 18:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:19.059 * Looking for test storage... 
00:39:19.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:19.059 18:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:19.059 18:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:39:19.059 18:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:19.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:19.059 --rc genhtml_branch_coverage=1 00:39:19.059 --rc genhtml_function_coverage=1 00:39:19.059 --rc genhtml_legend=1 00:39:19.059 --rc geninfo_all_blocks=1 00:39:19.059 --rc geninfo_unexecuted_blocks=1 00:39:19.059 00:39:19.059 ' 00:39:19.059 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:19.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:19.060 --rc genhtml_branch_coverage=1 00:39:19.060 --rc genhtml_function_coverage=1 00:39:19.060 --rc genhtml_legend=1 00:39:19.060 --rc geninfo_all_blocks=1 00:39:19.060 --rc geninfo_unexecuted_blocks=1 00:39:19.060 00:39:19.060 ' 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:19.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:19.060 --rc genhtml_branch_coverage=1 00:39:19.060 --rc genhtml_function_coverage=1 00:39:19.060 --rc genhtml_legend=1 00:39:19.060 --rc geninfo_all_blocks=1 00:39:19.060 --rc geninfo_unexecuted_blocks=1 00:39:19.060 00:39:19.060 ' 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:19.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:19.060 --rc genhtml_branch_coverage=1 00:39:19.060 --rc genhtml_function_coverage=1 00:39:19.060 --rc genhtml_legend=1 00:39:19.060 --rc geninfo_all_blocks=1 00:39:19.060 --rc geninfo_unexecuted_blocks=1 00:39:19.060 00:39:19.060 ' 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:19.060 
18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:39:19.060 18:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:39:20.960 18:46:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:20.960 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:20.960 18:46:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:20.961 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:20.961 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:20.961 18:46:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:20.961 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:20.961 18:46:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:20.961 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:20.961 18:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:20.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:20.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:39:20.961 00:39:20.961 --- 10.0.0.2 ping statistics --- 00:39:20.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:20.961 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:20.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:20.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:39:20.961 00:39:20.961 --- 10.0.0.1 ping statistics --- 00:39:20.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:20.961 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3160246 
00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3160246 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3160246 ']' 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:20.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:20.961 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:20.961 [2024-11-18 18:46:19.215003] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:20.961 [2024-11-18 18:46:19.217454] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:39:20.962 [2024-11-18 18:46:19.217562] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:21.219 [2024-11-18 18:46:19.358628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:21.219 [2024-11-18 18:46:19.480153] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:21.219 [2024-11-18 18:46:19.480226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:21.219 [2024-11-18 18:46:19.480249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:21.219 [2024-11-18 18:46:19.480266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:21.219 [2024-11-18 18:46:19.480284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:21.219 [2024-11-18 18:46:19.482641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:21.220 [2024-11-18 18:46:19.482677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:21.220 [2024-11-18 18:46:19.482687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:21.785 [2024-11-18 18:46:19.822759] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:21.785 [2024-11-18 18:46:19.823865] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:21.785 [2024-11-18 18:46:19.824680] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:39:21.785 [2024-11-18 18:46:19.825018] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:22.044 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:22.044 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:39:22.044 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:22.044 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:22.044 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:22.044 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:22.044 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:22.302 [2024-11-18 18:46:20.479734] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:22.302 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:22.560 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:39:22.560 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:23.126 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:39:23.126 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:39:23.385 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:39:23.643 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5a48880d-73f6-40f5-a3dd-8e524fb983ab 00:39:23.643 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5a48880d-73f6-40f5-a3dd-8e524fb983ab lvol 20 00:39:23.901 18:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=fc730ff6-5a55-49b7-b569-c840176201e2 00:39:23.901 18:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:24.158 18:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fc730ff6-5a55-49b7-b569-c840176201e2 00:39:24.416 18:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:24.674 [2024-11-18 18:46:22.831862] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:24.674 18:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:24.933 
18:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3160684 00:39:24.933 18:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:39:24.933 18:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:39:25.867 18:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot fc730ff6-5a55-49b7-b569-c840176201e2 MY_SNAPSHOT 00:39:26.446 18:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1dc793af-7f1e-4205-941c-fd457f227ef6 00:39:26.446 18:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize fc730ff6-5a55-49b7-b569-c840176201e2 30 00:39:26.799 18:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1dc793af-7f1e-4205-941c-fd457f227ef6 MY_CLONE 00:39:26.799 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ab32295f-fbfe-4248-9642-c7fdf7ea23ab 00:39:26.799 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ab32295f-fbfe-4248-9642-c7fdf7ea23ab 00:39:27.752 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3160684 00:39:35.862 Initializing NVMe Controllers 00:39:35.862 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:35.862 
Controller IO queue size 128, less than required. 00:39:35.862 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:35.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:39:35.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:39:35.862 Initialization complete. Launching workers. 00:39:35.862 ======================================================== 00:39:35.862 Latency(us) 00:39:35.862 Device Information : IOPS MiB/s Average min max 00:39:35.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8145.60 31.82 15723.36 369.80 195795.87 00:39:35.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8190.80 32.00 15635.59 3527.64 142947.48 00:39:35.862 ======================================================== 00:39:35.862 Total : 16336.40 63.81 15679.35 369.80 195795.87 00:39:35.862 00:39:35.862 18:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:35.862 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fc730ff6-5a55-49b7-b569-c840176201e2 00:39:36.120 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5a48880d-73f6-40f5-a3dd-8e524fb983ab 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:36.378 rmmod nvme_tcp 00:39:36.378 rmmod nvme_fabrics 00:39:36.378 rmmod nvme_keyring 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3160246 ']' 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3160246 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3160246 ']' 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3160246 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 3160246 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3160246' 00:39:36.378 killing process with pid 3160246 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3160246 00:39:36.378 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3160246 00:39:38.278 18:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:38.278 18:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:38.278 18:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:38.278 18:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:39:38.278 18:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:39:38.278 18:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:38.278 18:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:39:38.278 18:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:38.278 18:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:38.278 18:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:38.278 18:46:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:38.278 18:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:40.180 00:39:40.180 real 0m21.390s 00:39:40.180 user 0m58.793s 00:39:40.180 sys 0m7.623s 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:40.180 ************************************ 00:39:40.180 END TEST nvmf_lvol 00:39:40.180 ************************************ 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:40.180 ************************************ 00:39:40.180 START TEST nvmf_lvs_grow 00:39:40.180 ************************************ 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:40.180 * Looking for test storage... 
00:39:40.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:40.180 18:46:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:40.180 18:46:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:40.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:40.180 --rc genhtml_branch_coverage=1 00:39:40.180 --rc genhtml_function_coverage=1 00:39:40.180 --rc genhtml_legend=1 00:39:40.180 --rc geninfo_all_blocks=1 00:39:40.180 --rc geninfo_unexecuted_blocks=1 00:39:40.180 00:39:40.180 ' 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:40.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:40.180 --rc genhtml_branch_coverage=1 00:39:40.180 --rc genhtml_function_coverage=1 00:39:40.180 --rc genhtml_legend=1 00:39:40.180 --rc geninfo_all_blocks=1 00:39:40.180 --rc geninfo_unexecuted_blocks=1 00:39:40.180 00:39:40.180 ' 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:40.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:40.180 --rc genhtml_branch_coverage=1 00:39:40.180 --rc genhtml_function_coverage=1 00:39:40.180 --rc genhtml_legend=1 00:39:40.180 --rc geninfo_all_blocks=1 00:39:40.180 --rc geninfo_unexecuted_blocks=1 00:39:40.180 00:39:40.180 ' 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:40.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:40.180 --rc genhtml_branch_coverage=1 00:39:40.180 --rc genhtml_function_coverage=1 00:39:40.180 --rc genhtml_legend=1 00:39:40.180 --rc geninfo_all_blocks=1 00:39:40.180 --rc 
geninfo_unexecuted_blocks=1 00:39:40.180 00:39:40.180 ' 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:40.180 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:40.181 18:46:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:40.181 18:46:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:40.181 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:40.439 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:40.439 18:46:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:40.439 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:40.439 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:39:40.439 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:40.439 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:40.439 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:40.439 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:40.439 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:40.439 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:40.439 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:40.439 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:40.439 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:40.439 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:40.439 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:39:40.439 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:42.349 
18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:42.349 18:46:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:42.349 18:46:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:42.349 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:42.349 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:42.349 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:42.350 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:42.350 18:46:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:42.350 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:42.350 
18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:42.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:42.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:39:42.350 00:39:42.350 --- 10.0.0.2 ping statistics --- 00:39:42.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:42.350 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:42.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:42.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:39:42.350 00:39:42.350 --- 10.0.0.1 ping statistics --- 00:39:42.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:42.350 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:42.350 18:46:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3164183 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3164183 00:39:42.350 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3164183 ']' 00:39:42.608 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:42.608 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:42.608 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:42.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:42.608 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:42.609 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:42.609 [2024-11-18 18:46:40.769146] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:42.609 [2024-11-18 18:46:40.771834] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:39:42.609 [2024-11-18 18:46:40.771943] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:42.609 [2024-11-18 18:46:40.924469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:42.867 [2024-11-18 18:46:41.062120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:42.867 [2024-11-18 18:46:41.062189] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:42.867 [2024-11-18 18:46:41.062219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:42.867 [2024-11-18 18:46:41.062241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:42.867 [2024-11-18 18:46:41.062265] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:42.867 [2024-11-18 18:46:41.063929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:43.125 [2024-11-18 18:46:41.416438] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:43.125 [2024-11-18 18:46:41.416837] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:43.384 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:43.384 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:39:43.384 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:43.384 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:43.384 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:43.642 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:43.642 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:43.642 [2024-11-18 18:46:41.969015] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:43.900 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:39:43.900 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:43.900 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:43.900 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:43.900 ************************************ 00:39:43.900 START TEST lvs_grow_clean 00:39:43.900 ************************************ 00:39:43.900 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:39:43.900 18:46:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:43.900 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:43.900 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:43.900 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:43.900 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:43.900 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:43.900 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:43.900 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:43.900 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:44.158 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:44.158 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:44.417 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=82a1e5a9-929c-4dbc-813e-7ba4ec13a9ce 00:39:44.417 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82a1e5a9-929c-4dbc-813e-7ba4ec13a9ce 00:39:44.417 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:44.675 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:44.675 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:44.675 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 82a1e5a9-929c-4dbc-813e-7ba4ec13a9ce lvol 150 00:39:44.934 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=dc4b8081-cfb9-45f7-a45e-77e976f99790 00:39:44.934 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:44.934 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:45.193 [2024-11-18 18:46:43.444873] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:45.193 [2024-11-18 18:46:43.445010] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:45.193 true 00:39:45.193 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82a1e5a9-929c-4dbc-813e-7ba4ec13a9ce 00:39:45.193 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:45.452 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:45.452 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:45.711 18:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dc4b8081-cfb9-45f7-a45e-77e976f99790 00:39:45.970 18:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:46.228 [2024-11-18 18:46:44.557229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:46.487 18:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:46.745 18:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3164723 00:39:46.745 18:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:46.745 18:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:46.745 18:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3164723 /var/tmp/bdevperf.sock 00:39:46.745 18:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3164723 ']' 00:39:46.745 18:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:46.745 18:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:46.745 18:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:46.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:39:46.745 18:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:46.745 18:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:46.745 [2024-11-18 18:46:44.935463] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:39:46.745 [2024-11-18 18:46:44.935631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3164723 ] 00:39:46.745 [2024-11-18 18:46:45.074621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:47.003 [2024-11-18 18:46:45.205484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:47.938 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:47.938 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:39:47.938 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:48.196 Nvme0n1 00:39:48.196 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:48.454 [ 00:39:48.454 { 00:39:48.454 "name": "Nvme0n1", 00:39:48.454 "aliases": [ 00:39:48.454 "dc4b8081-cfb9-45f7-a45e-77e976f99790" 00:39:48.454 ], 00:39:48.454 "product_name": "NVMe disk", 00:39:48.454 
"block_size": 4096, 00:39:48.454 "num_blocks": 38912, 00:39:48.454 "uuid": "dc4b8081-cfb9-45f7-a45e-77e976f99790", 00:39:48.454 "numa_id": 0, 00:39:48.454 "assigned_rate_limits": { 00:39:48.454 "rw_ios_per_sec": 0, 00:39:48.454 "rw_mbytes_per_sec": 0, 00:39:48.454 "r_mbytes_per_sec": 0, 00:39:48.454 "w_mbytes_per_sec": 0 00:39:48.454 }, 00:39:48.454 "claimed": false, 00:39:48.454 "zoned": false, 00:39:48.454 "supported_io_types": { 00:39:48.454 "read": true, 00:39:48.454 "write": true, 00:39:48.454 "unmap": true, 00:39:48.454 "flush": true, 00:39:48.454 "reset": true, 00:39:48.454 "nvme_admin": true, 00:39:48.454 "nvme_io": true, 00:39:48.454 "nvme_io_md": false, 00:39:48.454 "write_zeroes": true, 00:39:48.454 "zcopy": false, 00:39:48.454 "get_zone_info": false, 00:39:48.454 "zone_management": false, 00:39:48.454 "zone_append": false, 00:39:48.454 "compare": true, 00:39:48.454 "compare_and_write": true, 00:39:48.454 "abort": true, 00:39:48.454 "seek_hole": false, 00:39:48.454 "seek_data": false, 00:39:48.454 "copy": true, 00:39:48.454 "nvme_iov_md": false 00:39:48.454 }, 00:39:48.454 "memory_domains": [ 00:39:48.454 { 00:39:48.454 "dma_device_id": "system", 00:39:48.454 "dma_device_type": 1 00:39:48.454 } 00:39:48.454 ], 00:39:48.454 "driver_specific": { 00:39:48.454 "nvme": [ 00:39:48.454 { 00:39:48.454 "trid": { 00:39:48.454 "trtype": "TCP", 00:39:48.454 "adrfam": "IPv4", 00:39:48.454 "traddr": "10.0.0.2", 00:39:48.454 "trsvcid": "4420", 00:39:48.454 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:48.454 }, 00:39:48.454 "ctrlr_data": { 00:39:48.454 "cntlid": 1, 00:39:48.454 "vendor_id": "0x8086", 00:39:48.454 "model_number": "SPDK bdev Controller", 00:39:48.454 "serial_number": "SPDK0", 00:39:48.454 "firmware_revision": "25.01", 00:39:48.454 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:48.454 "oacs": { 00:39:48.454 "security": 0, 00:39:48.454 "format": 0, 00:39:48.454 "firmware": 0, 00:39:48.454 "ns_manage": 0 00:39:48.454 }, 00:39:48.454 "multi_ctrlr": true, 
00:39:48.454 "ana_reporting": false 00:39:48.454 }, 00:39:48.454 "vs": { 00:39:48.454 "nvme_version": "1.3" 00:39:48.454 }, 00:39:48.454 "ns_data": { 00:39:48.454 "id": 1, 00:39:48.454 "can_share": true 00:39:48.454 } 00:39:48.454 } 00:39:48.454 ], 00:39:48.454 "mp_policy": "active_passive" 00:39:48.454 } 00:39:48.454 } 00:39:48.454 ] 00:39:48.454 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3164890 00:39:48.454 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:48.454 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:48.454 Running I/O for 10 seconds... 00:39:49.825 Latency(us) 00:39:49.825 [2024-11-18T17:46:48.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:49.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:49.825 Nvme0n1 : 1.00 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:39:49.825 [2024-11-18T17:46:48.162Z] =================================================================================================================== 00:39:49.825 [2024-11-18T17:46:48.162Z] Total : 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:39:49.825 00:39:50.390 18:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 82a1e5a9-929c-4dbc-813e-7ba4ec13a9ce 00:39:50.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:50.647 Nvme0n1 : 2.00 10668.00 41.67 0.00 0.00 0.00 0.00 0.00 00:39:50.647 [2024-11-18T17:46:48.984Z] 
=================================================================================================================== 00:39:50.647 [2024-11-18T17:46:48.985Z] Total : 10668.00 41.67 0.00 0.00 0.00 0.00 0.00 00:39:50.648 00:39:50.648 true 00:39:50.648 18:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82a1e5a9-929c-4dbc-813e-7ba4ec13a9ce 00:39:50.648 18:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:51.212 18:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:51.212 18:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:51.212 18:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3164890 00:39:51.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:51.470 Nvme0n1 : 3.00 10508.67 41.05 0.00 0.00 0.00 0.00 0.00 00:39:51.470 [2024-11-18T17:46:49.807Z] =================================================================================================================== 00:39:51.470 [2024-11-18T17:46:49.807Z] Total : 10508.67 41.05 0.00 0.00 0.00 0.00 0.00 00:39:51.470 00:39:52.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:52.845 Nvme0n1 : 4.00 10313.50 40.29 0.00 0.00 0.00 0.00 0.00 00:39:52.845 [2024-11-18T17:46:51.182Z] =================================================================================================================== 00:39:52.845 [2024-11-18T17:46:51.182Z] Total : 10313.50 40.29 0.00 0.00 0.00 0.00 0.00 00:39:52.845 00:39:53.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:39:53.779 Nvme0n1 : 5.00 10279.60 40.15 0.00 0.00 0.00 0.00 0.00 00:39:53.779 [2024-11-18T17:46:52.116Z] =================================================================================================================== 00:39:53.779 [2024-11-18T17:46:52.116Z] Total : 10279.60 40.15 0.00 0.00 0.00 0.00 0.00 00:39:53.779 00:39:54.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:54.712 Nvme0n1 : 6.00 10267.67 40.11 0.00 0.00 0.00 0.00 0.00 00:39:54.712 [2024-11-18T17:46:53.049Z] =================================================================================================================== 00:39:54.712 [2024-11-18T17:46:53.049Z] Total : 10267.67 40.11 0.00 0.00 0.00 0.00 0.00 00:39:54.712 00:39:55.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:55.646 Nvme0n1 : 7.00 10268.29 40.11 0.00 0.00 0.00 0.00 0.00 00:39:55.646 [2024-11-18T17:46:53.983Z] =================================================================================================================== 00:39:55.646 [2024-11-18T17:46:53.983Z] Total : 10268.29 40.11 0.00 0.00 0.00 0.00 0.00 00:39:55.646 00:39:56.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:56.580 Nvme0n1 : 8.00 10266.75 40.10 0.00 0.00 0.00 0.00 0.00 00:39:56.580 [2024-11-18T17:46:54.917Z] =================================================================================================================== 00:39:56.580 [2024-11-18T17:46:54.917Z] Total : 10266.75 40.10 0.00 0.00 0.00 0.00 0.00 00:39:56.580 00:39:57.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:57.514 Nvme0n1 : 9.00 10269.11 40.11 0.00 0.00 0.00 0.00 0.00 00:39:57.514 [2024-11-18T17:46:55.851Z] =================================================================================================================== 00:39:57.514 [2024-11-18T17:46:55.851Z] Total : 10269.11 40.11 0.00 0.00 0.00 0.00 0.00 00:39:57.514 
00:39:58.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:58.499 Nvme0n1 : 10.00 10267.80 40.11 0.00 0.00 0.00 0.00 0.00 00:39:58.499 [2024-11-18T17:46:56.836Z] =================================================================================================================== 00:39:58.499 [2024-11-18T17:46:56.836Z] Total : 10267.80 40.11 0.00 0.00 0.00 0.00 0.00 00:39:58.499 00:39:58.499 00:39:58.499 Latency(us) 00:39:58.499 [2024-11-18T17:46:56.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:58.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:58.499 Nvme0n1 : 10.01 10269.07 40.11 0.00 0.00 12452.72 4126.34 24272.59 00:39:58.499 [2024-11-18T17:46:56.836Z] =================================================================================================================== 00:39:58.499 [2024-11-18T17:46:56.836Z] Total : 10269.07 40.11 0.00 0.00 12452.72 4126.34 24272.59 00:39:58.499 { 00:39:58.499 "results": [ 00:39:58.499 { 00:39:58.499 "job": "Nvme0n1", 00:39:58.499 "core_mask": "0x2", 00:39:58.499 "workload": "randwrite", 00:39:58.499 "status": "finished", 00:39:58.499 "queue_depth": 128, 00:39:58.499 "io_size": 4096, 00:39:58.499 "runtime": 10.011229, 00:39:58.499 "iops": 10269.068862574215, 00:39:58.499 "mibps": 40.11355024443053, 00:39:58.499 "io_failed": 0, 00:39:58.499 "io_timeout": 0, 00:39:58.499 "avg_latency_us": 12452.72396189587, 00:39:58.499 "min_latency_us": 4126.34074074074, 00:39:58.499 "max_latency_us": 24272.59259259259 00:39:58.499 } 00:39:58.499 ], 00:39:58.499 "core_count": 1 00:39:58.499 } 00:39:58.758 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3164723 00:39:58.758 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3164723 ']' 00:39:58.758 18:46:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3164723 00:39:58.758 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:39:58.758 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:58.758 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3164723 00:39:58.758 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:58.758 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:58.758 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3164723' 00:39:58.758 killing process with pid 3164723 00:39:58.758 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3164723 00:39:58.758 Received shutdown signal, test time was about 10.000000 seconds 00:39:58.758 00:39:58.758 Latency(us) 00:39:58.758 [2024-11-18T17:46:57.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:58.758 [2024-11-18T17:46:57.095Z] =================================================================================================================== 00:39:58.758 [2024-11-18T17:46:57.095Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:58.758 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3164723 00:39:59.692 18:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:59.692 18:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:00.258 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82a1e5a9-929c-4dbc-813e-7ba4ec13a9ce 00:40:00.258 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:00.258 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:00.258 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:40:00.258 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:00.824 [2024-11-18 18:46:58.853052] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:00.824 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82a1e5a9-929c-4dbc-813e-7ba4ec13a9ce 00:40:00.824 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:40:00.824 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82a1e5a9-929c-4dbc-813e-7ba4ec13a9ce 00:40:00.824 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:00.824 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:00.824 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:00.824 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:00.824 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:00.824 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:00.824 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:00.824 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:00.824 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82a1e5a9-929c-4dbc-813e-7ba4ec13a9ce 00:40:00.824 request: 00:40:00.824 { 00:40:00.824 "uuid": "82a1e5a9-929c-4dbc-813e-7ba4ec13a9ce", 00:40:00.824 "method": 
"bdev_lvol_get_lvstores", 00:40:00.824 "req_id": 1 00:40:00.824 } 00:40:00.824 Got JSON-RPC error response 00:40:00.824 response: 00:40:00.824 { 00:40:00.824 "code": -19, 00:40:00.824 "message": "No such device" 00:40:00.824 } 00:40:01.084 18:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:40:01.084 18:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:01.084 18:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:01.084 18:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:01.084 18:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:01.351 aio_bdev 00:40:01.351 18:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dc4b8081-cfb9-45f7-a45e-77e976f99790 00:40:01.351 18:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=dc4b8081-cfb9-45f7-a45e-77e976f99790 00:40:01.351 18:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:01.351 18:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:40:01.351 18:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:01.352 18:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:01.352 18:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:01.609 18:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dc4b8081-cfb9-45f7-a45e-77e976f99790 -t 2000 00:40:01.867 [ 00:40:01.867 { 00:40:01.867 "name": "dc4b8081-cfb9-45f7-a45e-77e976f99790", 00:40:01.867 "aliases": [ 00:40:01.867 "lvs/lvol" 00:40:01.867 ], 00:40:01.867 "product_name": "Logical Volume", 00:40:01.867 "block_size": 4096, 00:40:01.867 "num_blocks": 38912, 00:40:01.867 "uuid": "dc4b8081-cfb9-45f7-a45e-77e976f99790", 00:40:01.867 "assigned_rate_limits": { 00:40:01.867 "rw_ios_per_sec": 0, 00:40:01.867 "rw_mbytes_per_sec": 0, 00:40:01.867 "r_mbytes_per_sec": 0, 00:40:01.867 "w_mbytes_per_sec": 0 00:40:01.867 }, 00:40:01.867 "claimed": false, 00:40:01.867 "zoned": false, 00:40:01.867 "supported_io_types": { 00:40:01.867 "read": true, 00:40:01.867 "write": true, 00:40:01.867 "unmap": true, 00:40:01.867 "flush": false, 00:40:01.867 "reset": true, 00:40:01.867 "nvme_admin": false, 00:40:01.867 "nvme_io": false, 00:40:01.867 "nvme_io_md": false, 00:40:01.867 "write_zeroes": true, 00:40:01.867 "zcopy": false, 00:40:01.867 "get_zone_info": false, 00:40:01.867 "zone_management": false, 00:40:01.867 "zone_append": false, 00:40:01.867 "compare": false, 00:40:01.867 "compare_and_write": false, 00:40:01.867 "abort": false, 00:40:01.867 "seek_hole": true, 00:40:01.867 "seek_data": true, 00:40:01.867 "copy": false, 00:40:01.867 "nvme_iov_md": false 00:40:01.867 }, 00:40:01.867 "driver_specific": { 00:40:01.867 "lvol": { 00:40:01.867 "lvol_store_uuid": "82a1e5a9-929c-4dbc-813e-7ba4ec13a9ce", 00:40:01.867 "base_bdev": "aio_bdev", 00:40:01.868 
"thin_provision": false, 00:40:01.868 "num_allocated_clusters": 38, 00:40:01.868 "snapshot": false, 00:40:01.868 "clone": false, 00:40:01.868 "esnap_clone": false 00:40:01.868 } 00:40:01.868 } 00:40:01.868 } 00:40:01.868 ] 00:40:01.868 18:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:40:01.868 18:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82a1e5a9-929c-4dbc-813e-7ba4ec13a9ce 00:40:01.868 18:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:02.126 18:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:02.126 18:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82a1e5a9-929c-4dbc-813e-7ba4ec13a9ce 00:40:02.126 18:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:02.384 18:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:02.384 18:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dc4b8081-cfb9-45f7-a45e-77e976f99790 00:40:02.641 18:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 82a1e5a9-929c-4dbc-813e-7ba4ec13a9ce 
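The lvol JSON above reports `num_blocks: 38912` and `num_allocated_clusters: 38`, and the surrounding checks expect `free_clusters == 61` out of 99 total data clusters. Those figures are mutually consistent with the test's geometry: a 4 MiB cluster size, 4096-byte blocks, and a 150 MiB lvol rounded up to a cluster boundary. A sketch of that arithmetic (values copied from the log, names illustrative):

```python
import math

# Geometry implied by the log: 4 MiB clusters, 4096-byte blocks,
# a 150 MiB lvol, and 99 data clusters after growing the aio file.
cluster_mib = 4
block_size = 4096
lvol_mib = 150
data_clusters = 99                 # total_data_clusters after the grow

allocated = math.ceil(lvol_mib / cluster_mib)           # 150 MiB -> 38 clusters
num_blocks = allocated * cluster_mib * 1024 * 1024 // block_size
free_clusters = data_clusters - allocated

print(allocated, num_blocks, free_clusters)             # 38 38912 61
```

This is why the lvol shows 152 MiB of backing blocks (38 clusters) rather than exactly 150 MiB: thick-provisioned lvols allocate whole clusters.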
00:40:03.207 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:03.207 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:03.465 00:40:03.465 real 0m19.545s 00:40:03.465 user 0m18.417s 00:40:03.465 sys 0m2.355s 00:40:03.465 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:03.465 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:03.465 ************************************ 00:40:03.465 END TEST lvs_grow_clean 00:40:03.465 ************************************ 00:40:03.465 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:40:03.465 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:03.465 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:03.465 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:03.465 ************************************ 00:40:03.465 START TEST lvs_grow_dirty 00:40:03.465 ************************************ 00:40:03.465 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:40:03.465 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:03.465 18:47:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:03.465 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:03.465 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:03.465 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:03.465 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:03.465 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:03.465 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:03.465 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:03.723 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:03.723 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:03.982 18:47:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=09c6992a-16ab-4512-8c92-afd8f2a63e12 00:40:03.982 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09c6992a-16ab-4512-8c92-afd8f2a63e12 00:40:03.982 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:04.239 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:04.239 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:04.239 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 09c6992a-16ab-4512-8c92-afd8f2a63e12 lvol 150 00:40:04.497 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=2250f862-ba69-46f3-9736-437f1193f5d9 00:40:04.497 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:04.497 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:04.755 [2024-11-18 18:47:03.064839] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:04.755 [2024-11-18 
18:47:03.064980] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:04.755 true 00:40:04.755 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09c6992a-16ab-4512-8c92-afd8f2a63e12 00:40:04.755 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:05.013 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:05.013 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:05.579 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2250f862-ba69-46f3-9736-437f1193f5d9 00:40:05.579 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:05.837 [2024-11-18 18:47:04.141339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:05.837 18:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:06.096 18:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3167036 00:40:06.096 18:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:06.096 18:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:06.096 18:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3167036 /var/tmp/bdevperf.sock 00:40:06.096 18:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3167036 ']' 00:40:06.096 18:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:06.096 18:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:06.096 18:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:06.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:06.096 18:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:06.096 18:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:06.354 [2024-11-18 18:47:04.513693] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
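The `bdev_aio_rescan` notice earlier in the dirty test reports "old block count 51200, new block count 102400" after the backing file is truncated from 200 MiB to 400 MiB. With the 4096-byte block size the test passes to `bdev_aio_create`, those counts follow directly; a small sketch (names illustrative):

```python
# Block counts implied by truncating the aio backing file from
# 200 MiB to 400 MiB with a 4096-byte logical block size.
block_size = 4096
old_blocks = 200 * 1024 * 1024 // block_size
new_blocks = 400 * 1024 * 1024 // block_size
print(old_blocks, new_blocks)      # 51200 102400
```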
00:40:06.354 [2024-11-18 18:47:04.513849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3167036 ] 00:40:06.354 [2024-11-18 18:47:04.657503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:06.612 [2024-11-18 18:47:04.783185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:07.179 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:07.179 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:40:07.179 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:07.745 Nvme0n1 00:40:07.745 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:08.003 [ 00:40:08.003 { 00:40:08.003 "name": "Nvme0n1", 00:40:08.003 "aliases": [ 00:40:08.003 "2250f862-ba69-46f3-9736-437f1193f5d9" 00:40:08.003 ], 00:40:08.003 "product_name": "NVMe disk", 00:40:08.003 "block_size": 4096, 00:40:08.003 "num_blocks": 38912, 00:40:08.003 "uuid": "2250f862-ba69-46f3-9736-437f1193f5d9", 00:40:08.003 "numa_id": 0, 00:40:08.003 "assigned_rate_limits": { 00:40:08.003 "rw_ios_per_sec": 0, 00:40:08.003 "rw_mbytes_per_sec": 0, 00:40:08.003 "r_mbytes_per_sec": 0, 00:40:08.003 "w_mbytes_per_sec": 0 00:40:08.003 }, 00:40:08.003 "claimed": false, 00:40:08.003 "zoned": false, 
00:40:08.003 "supported_io_types": { 00:40:08.003 "read": true, 00:40:08.003 "write": true, 00:40:08.003 "unmap": true, 00:40:08.003 "flush": true, 00:40:08.003 "reset": true, 00:40:08.003 "nvme_admin": true, 00:40:08.003 "nvme_io": true, 00:40:08.003 "nvme_io_md": false, 00:40:08.003 "write_zeroes": true, 00:40:08.003 "zcopy": false, 00:40:08.003 "get_zone_info": false, 00:40:08.003 "zone_management": false, 00:40:08.003 "zone_append": false, 00:40:08.003 "compare": true, 00:40:08.003 "compare_and_write": true, 00:40:08.003 "abort": true, 00:40:08.003 "seek_hole": false, 00:40:08.003 "seek_data": false, 00:40:08.003 "copy": true, 00:40:08.003 "nvme_iov_md": false 00:40:08.003 }, 00:40:08.003 "memory_domains": [ 00:40:08.003 { 00:40:08.003 "dma_device_id": "system", 00:40:08.003 "dma_device_type": 1 00:40:08.003 } 00:40:08.003 ], 00:40:08.003 "driver_specific": { 00:40:08.003 "nvme": [ 00:40:08.003 { 00:40:08.003 "trid": { 00:40:08.003 "trtype": "TCP", 00:40:08.003 "adrfam": "IPv4", 00:40:08.003 "traddr": "10.0.0.2", 00:40:08.003 "trsvcid": "4420", 00:40:08.003 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:08.003 }, 00:40:08.003 "ctrlr_data": { 00:40:08.003 "cntlid": 1, 00:40:08.003 "vendor_id": "0x8086", 00:40:08.003 "model_number": "SPDK bdev Controller", 00:40:08.003 "serial_number": "SPDK0", 00:40:08.003 "firmware_revision": "25.01", 00:40:08.003 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:08.003 "oacs": { 00:40:08.003 "security": 0, 00:40:08.003 "format": 0, 00:40:08.003 "firmware": 0, 00:40:08.003 "ns_manage": 0 00:40:08.003 }, 00:40:08.003 "multi_ctrlr": true, 00:40:08.003 "ana_reporting": false 00:40:08.003 }, 00:40:08.003 "vs": { 00:40:08.003 "nvme_version": "1.3" 00:40:08.003 }, 00:40:08.003 "ns_data": { 00:40:08.003 "id": 1, 00:40:08.003 "can_share": true 00:40:08.003 } 00:40:08.003 } 00:40:08.003 ], 00:40:08.003 "mp_policy": "active_passive" 00:40:08.003 } 00:40:08.003 } 00:40:08.003 ] 00:40:08.003 18:47:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3167194 00:40:08.003 18:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:08.003 18:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:08.003 Running I/O for 10 seconds... 00:40:08.938 Latency(us) 00:40:08.938 [2024-11-18T17:47:07.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:08.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:08.938 Nvme0n1 : 1.00 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:40:08.938 [2024-11-18T17:47:07.275Z] =================================================================================================================== 00:40:08.938 [2024-11-18T17:47:07.275Z] Total : 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:40:08.938 00:40:09.873 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 09c6992a-16ab-4512-8c92-afd8f2a63e12 00:40:10.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:10.133 Nvme0n1 : 2.00 10604.50 41.42 0.00 0.00 0.00 0.00 0.00 00:40:10.133 [2024-11-18T17:47:08.470Z] =================================================================================================================== 00:40:10.133 [2024-11-18T17:47:08.470Z] Total : 10604.50 41.42 0.00 0.00 0.00 0.00 0.00 00:40:10.133 00:40:10.133 true 00:40:10.133 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 09c6992a-16ab-4512-8c92-afd8f2a63e12 00:40:10.133 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:10.700 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:10.700 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:10.700 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3167194 00:40:10.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:10.959 Nvme0n1 : 3.00 10625.67 41.51 0.00 0.00 0.00 0.00 0.00 00:40:10.959 [2024-11-18T17:47:09.296Z] =================================================================================================================== 00:40:10.959 [2024-11-18T17:47:09.296Z] Total : 10625.67 41.51 0.00 0.00 0.00 0.00 0.00 00:40:10.959 00:40:12.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:12.336 Nvme0n1 : 4.00 10668.00 41.67 0.00 0.00 0.00 0.00 0.00 00:40:12.336 [2024-11-18T17:47:10.673Z] =================================================================================================================== 00:40:12.336 [2024-11-18T17:47:10.673Z] Total : 10668.00 41.67 0.00 0.00 0.00 0.00 0.00 00:40:12.336 00:40:13.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:13.272 Nvme0n1 : 5.00 10718.80 41.87 0.00 0.00 0.00 0.00 0.00 00:40:13.272 [2024-11-18T17:47:11.609Z] =================================================================================================================== 00:40:13.272 [2024-11-18T17:47:11.609Z] Total : 10718.80 41.87 0.00 0.00 0.00 0.00 0.00 00:40:13.272 00:40:14.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:40:14.207 Nvme0n1 : 6.00 10795.00 42.17 0.00 0.00 0.00 0.00 0.00 00:40:14.207 [2024-11-18T17:47:12.544Z] =================================================================================================================== 00:40:14.207 [2024-11-18T17:47:12.544Z] Total : 10795.00 42.17 0.00 0.00 0.00 0.00 0.00 00:40:14.207 00:40:15.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:15.142 Nvme0n1 : 7.00 10894.86 42.56 0.00 0.00 0.00 0.00 0.00 00:40:15.142 [2024-11-18T17:47:13.479Z] =================================================================================================================== 00:40:15.142 [2024-11-18T17:47:13.479Z] Total : 10894.86 42.56 0.00 0.00 0.00 0.00 0.00 00:40:15.142 00:40:16.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:16.078 Nvme0n1 : 8.00 10906.12 42.60 0.00 0.00 0.00 0.00 0.00 00:40:16.078 [2024-11-18T17:47:14.415Z] =================================================================================================================== 00:40:16.078 [2024-11-18T17:47:14.415Z] Total : 10906.12 42.60 0.00 0.00 0.00 0.00 0.00 00:40:16.078 00:40:17.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:17.013 Nvme0n1 : 9.00 10922.00 42.66 0.00 0.00 0.00 0.00 0.00 00:40:17.013 [2024-11-18T17:47:15.350Z] =================================================================================================================== 00:40:17.013 [2024-11-18T17:47:15.350Z] Total : 10922.00 42.66 0.00 0.00 0.00 0.00 0.00 00:40:17.013 00:40:17.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:17.982 Nvme0n1 : 10.00 10922.00 42.66 0.00 0.00 0.00 0.00 0.00 00:40:17.982 [2024-11-18T17:47:16.319Z] =================================================================================================================== 00:40:17.982 [2024-11-18T17:47:16.319Z] Total : 10922.00 42.66 0.00 0.00 0.00 0.00 0.00 00:40:17.982 00:40:17.982 
00:40:17.982 Latency(us) 00:40:17.982 [2024-11-18T17:47:16.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:17.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:17.982 Nvme0n1 : 10.01 10928.31 42.69 0.00 0.00 11705.71 9757.58 26020.22 00:40:17.982 [2024-11-18T17:47:16.319Z] =================================================================================================================== 00:40:17.982 [2024-11-18T17:47:16.319Z] Total : 10928.31 42.69 0.00 0.00 11705.71 9757.58 26020.22 00:40:17.982 { 00:40:17.982 "results": [ 00:40:17.982 { 00:40:17.982 "job": "Nvme0n1", 00:40:17.982 "core_mask": "0x2", 00:40:17.982 "workload": "randwrite", 00:40:17.982 "status": "finished", 00:40:17.982 "queue_depth": 128, 00:40:17.983 "io_size": 4096, 00:40:17.983 "runtime": 10.005941, 00:40:17.983 "iops": 10928.307492518694, 00:40:17.983 "mibps": 42.68870114265115, 00:40:17.983 "io_failed": 0, 00:40:17.983 "io_timeout": 0, 00:40:17.983 "avg_latency_us": 11705.706390497751, 00:40:17.983 "min_latency_us": 9757.582222222221, 00:40:17.983 "max_latency_us": 26020.21925925926 00:40:17.983 } 00:40:17.983 ], 00:40:17.983 "core_count": 1 00:40:17.983 } 00:40:17.983 18:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3167036 00:40:17.983 18:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3167036 ']' 00:40:17.983 18:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3167036 00:40:17.983 18:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:40:17.983 18:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:17.983 18:47:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3167036 00:40:18.239 18:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:18.239 18:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:18.239 18:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3167036' 00:40:18.239 killing process with pid 3167036 00:40:18.239 18:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3167036 00:40:18.239 Received shutdown signal, test time was about 10.000000 seconds 00:40:18.239 00:40:18.239 Latency(us) 00:40:18.239 [2024-11-18T17:47:16.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:18.239 [2024-11-18T17:47:16.576Z] =================================================================================================================== 00:40:18.239 [2024-11-18T17:47:16.576Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:18.239 18:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3167036 00:40:19.170 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:19.170 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:19.733 18:47:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09c6992a-16ab-4512-8c92-afd8f2a63e12 00:40:19.733 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:19.733 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:19.733 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:40:19.733 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3164183 00:40:19.733 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3164183 00:40:19.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3164183 Killed "${NVMF_APP[@]}" "$@" 00:40:19.990 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:40:19.990 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:40:19.990 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:19.990 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:19.990 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:19.990 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3168641 00:40:19.990 18:47:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:19.990 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3168641 00:40:19.990 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3168641 ']' 00:40:19.990 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:19.990 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:19.990 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:19.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:19.990 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:19.990 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:19.990 [2024-11-18 18:47:18.209177] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:19.990 [2024-11-18 18:47:18.211908] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:40:19.990 [2024-11-18 18:47:18.212012] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:20.248 [2024-11-18 18:47:18.366189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:20.248 [2024-11-18 18:47:18.499955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:20.248 [2024-11-18 18:47:18.500032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:20.248 [2024-11-18 18:47:18.500063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:20.248 [2024-11-18 18:47:18.500085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:20.248 [2024-11-18 18:47:18.500109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:20.248 [2024-11-18 18:47:18.501755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:20.815 [2024-11-18 18:47:18.875868] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:20.815 [2024-11-18 18:47:18.876281] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:21.071 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:21.071 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:40:21.071 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:21.071 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:21.071 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:21.071 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:21.071 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:21.330 [2024-11-18 18:47:19.501675] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:40:21.330 [2024-11-18 18:47:19.501892] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:40:21.330 [2024-11-18 18:47:19.501979] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:40:21.330 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:40:21.330 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 2250f862-ba69-46f3-9736-437f1193f5d9 00:40:21.330 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=2250f862-ba69-46f3-9736-437f1193f5d9 00:40:21.330 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:21.330 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:40:21.330 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:21.330 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:21.331 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:21.590 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2250f862-ba69-46f3-9736-437f1193f5d9 -t 2000 00:40:21.848 [ 00:40:21.848 { 00:40:21.848 "name": "2250f862-ba69-46f3-9736-437f1193f5d9", 00:40:21.848 "aliases": [ 00:40:21.848 "lvs/lvol" 00:40:21.848 ], 00:40:21.848 "product_name": "Logical Volume", 00:40:21.848 "block_size": 4096, 00:40:21.848 "num_blocks": 38912, 00:40:21.848 "uuid": "2250f862-ba69-46f3-9736-437f1193f5d9", 00:40:21.848 "assigned_rate_limits": { 00:40:21.848 "rw_ios_per_sec": 0, 00:40:21.848 "rw_mbytes_per_sec": 0, 00:40:21.848 "r_mbytes_per_sec": 0, 00:40:21.848 "w_mbytes_per_sec": 0 00:40:21.848 }, 00:40:21.848 "claimed": false, 00:40:21.848 "zoned": false, 00:40:21.848 "supported_io_types": { 00:40:21.848 "read": true, 00:40:21.848 "write": true, 00:40:21.848 "unmap": true, 00:40:21.848 "flush": false, 00:40:21.848 "reset": true, 00:40:21.848 "nvme_admin": false, 00:40:21.848 "nvme_io": false, 00:40:21.848 "nvme_io_md": false, 00:40:21.848 "write_zeroes": true, 
00:40:21.848 "zcopy": false, 00:40:21.848 "get_zone_info": false, 00:40:21.848 "zone_management": false, 00:40:21.848 "zone_append": false, 00:40:21.848 "compare": false, 00:40:21.848 "compare_and_write": false, 00:40:21.848 "abort": false, 00:40:21.848 "seek_hole": true, 00:40:21.848 "seek_data": true, 00:40:21.848 "copy": false, 00:40:21.848 "nvme_iov_md": false 00:40:21.848 }, 00:40:21.848 "driver_specific": { 00:40:21.848 "lvol": { 00:40:21.848 "lvol_store_uuid": "09c6992a-16ab-4512-8c92-afd8f2a63e12", 00:40:21.848 "base_bdev": "aio_bdev", 00:40:21.848 "thin_provision": false, 00:40:21.848 "num_allocated_clusters": 38, 00:40:21.848 "snapshot": false, 00:40:21.848 "clone": false, 00:40:21.848 "esnap_clone": false 00:40:21.848 } 00:40:21.848 } 00:40:21.848 } 00:40:21.848 ] 00:40:21.848 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:40:21.848 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09c6992a-16ab-4512-8c92-afd8f2a63e12 00:40:21.848 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:40:22.106 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:40:22.106 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:40:22.106 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09c6992a-16ab-4512-8c92-afd8f2a63e12 00:40:22.672 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:40:22.672 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:22.672 [2024-11-18 18:47:20.954807] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:22.672 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09c6992a-16ab-4512-8c92-afd8f2a63e12 00:40:22.672 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:40:22.672 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09c6992a-16ab-4512-8c92-afd8f2a63e12 00:40:22.672 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:22.672 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:22.672 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:22.672 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:22.672 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:22.672 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:22.672 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:22.672 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:22.672 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09c6992a-16ab-4512-8c92-afd8f2a63e12 00:40:22.960 request: 00:40:22.960 { 00:40:22.960 "uuid": "09c6992a-16ab-4512-8c92-afd8f2a63e12", 00:40:22.960 "method": "bdev_lvol_get_lvstores", 00:40:22.960 "req_id": 1 00:40:22.960 } 00:40:22.960 Got JSON-RPC error response 00:40:22.960 response: 00:40:22.960 { 00:40:22.960 "code": -19, 00:40:22.960 "message": "No such device" 00:40:22.960 } 00:40:22.960 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:40:22.960 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:22.961 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:22.961 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:22.961 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:23.261 aio_bdev 00:40:23.261 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2250f862-ba69-46f3-9736-437f1193f5d9 00:40:23.261 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=2250f862-ba69-46f3-9736-437f1193f5d9 00:40:23.261 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:23.261 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:40:23.261 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:23.261 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:23.261 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:23.519 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2250f862-ba69-46f3-9736-437f1193f5d9 -t 2000 00:40:24.085 [ 00:40:24.085 { 00:40:24.085 "name": "2250f862-ba69-46f3-9736-437f1193f5d9", 00:40:24.085 "aliases": [ 00:40:24.085 "lvs/lvol" 00:40:24.085 ], 00:40:24.085 "product_name": "Logical Volume", 00:40:24.085 "block_size": 4096, 00:40:24.085 "num_blocks": 38912, 00:40:24.085 "uuid": "2250f862-ba69-46f3-9736-437f1193f5d9", 00:40:24.085 "assigned_rate_limits": { 00:40:24.085 "rw_ios_per_sec": 0, 00:40:24.085 "rw_mbytes_per_sec": 0, 00:40:24.085 
"r_mbytes_per_sec": 0, 00:40:24.085 "w_mbytes_per_sec": 0 00:40:24.085 }, 00:40:24.085 "claimed": false, 00:40:24.085 "zoned": false, 00:40:24.085 "supported_io_types": { 00:40:24.085 "read": true, 00:40:24.085 "write": true, 00:40:24.085 "unmap": true, 00:40:24.085 "flush": false, 00:40:24.085 "reset": true, 00:40:24.085 "nvme_admin": false, 00:40:24.085 "nvme_io": false, 00:40:24.085 "nvme_io_md": false, 00:40:24.085 "write_zeroes": true, 00:40:24.085 "zcopy": false, 00:40:24.085 "get_zone_info": false, 00:40:24.085 "zone_management": false, 00:40:24.085 "zone_append": false, 00:40:24.085 "compare": false, 00:40:24.085 "compare_and_write": false, 00:40:24.085 "abort": false, 00:40:24.085 "seek_hole": true, 00:40:24.085 "seek_data": true, 00:40:24.085 "copy": false, 00:40:24.085 "nvme_iov_md": false 00:40:24.085 }, 00:40:24.085 "driver_specific": { 00:40:24.085 "lvol": { 00:40:24.085 "lvol_store_uuid": "09c6992a-16ab-4512-8c92-afd8f2a63e12", 00:40:24.085 "base_bdev": "aio_bdev", 00:40:24.085 "thin_provision": false, 00:40:24.085 "num_allocated_clusters": 38, 00:40:24.085 "snapshot": false, 00:40:24.085 "clone": false, 00:40:24.085 "esnap_clone": false 00:40:24.085 } 00:40:24.085 } 00:40:24.085 } 00:40:24.085 ] 00:40:24.085 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:40:24.085 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09c6992a-16ab-4512-8c92-afd8f2a63e12 00:40:24.085 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:24.085 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:24.085 18:47:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09c6992a-16ab-4512-8c92-afd8f2a63e12 00:40:24.085 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:24.343 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:24.343 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2250f862-ba69-46f3-9736-437f1193f5d9 00:40:24.910 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 09c6992a-16ab-4512-8c92-afd8f2a63e12 00:40:25.168 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:25.427 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:25.427 00:40:25.427 real 0m22.024s 00:40:25.427 user 0m39.145s 00:40:25.427 sys 0m4.852s 00:40:25.427 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:25.428 ************************************ 00:40:25.428 END TEST lvs_grow_dirty 00:40:25.428 ************************************ 
00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:40:25.428 nvmf_trace.0 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:25.428 18:47:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:25.428 rmmod nvme_tcp 00:40:25.428 rmmod nvme_fabrics 00:40:25.428 rmmod nvme_keyring 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3168641 ']' 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3168641 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3168641 ']' 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3168641 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:25.428 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3168641 00:40:25.686 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:25.686 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:25.686 
18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3168641' 00:40:25.686 killing process with pid 3168641 00:40:25.686 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3168641 00:40:25.686 18:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3168641 00:40:26.621 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:26.621 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:26.621 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:26.621 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:40:26.621 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:40:26.621 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:26.621 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:40:26.621 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:26.621 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:26.621 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:26.621 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:26.621 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:29.154 
18:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:29.154 00:40:29.154 real 0m48.642s 00:40:29.154 user 1m0.819s 00:40:29.154 sys 0m9.321s 00:40:29.154 18:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:29.154 18:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:29.154 ************************************ 00:40:29.154 END TEST nvmf_lvs_grow 00:40:29.154 ************************************ 00:40:29.154 18:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:29.154 ************************************ 00:40:29.154 START TEST nvmf_bdev_io_wait 00:40:29.154 ************************************ 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:29.154 * Looking for test storage... 
00:40:29.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:29.154 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:29.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.155 --rc genhtml_branch_coverage=1 00:40:29.155 --rc genhtml_function_coverage=1 00:40:29.155 --rc genhtml_legend=1 00:40:29.155 --rc geninfo_all_blocks=1 00:40:29.155 --rc geninfo_unexecuted_blocks=1 00:40:29.155 00:40:29.155 ' 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:29.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.155 --rc genhtml_branch_coverage=1 00:40:29.155 --rc genhtml_function_coverage=1 00:40:29.155 --rc genhtml_legend=1 00:40:29.155 --rc geninfo_all_blocks=1 00:40:29.155 --rc geninfo_unexecuted_blocks=1 00:40:29.155 00:40:29.155 ' 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:29.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.155 --rc genhtml_branch_coverage=1 00:40:29.155 --rc genhtml_function_coverage=1 00:40:29.155 --rc genhtml_legend=1 00:40:29.155 --rc geninfo_all_blocks=1 00:40:29.155 --rc geninfo_unexecuted_blocks=1 00:40:29.155 00:40:29.155 ' 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:29.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.155 --rc genhtml_branch_coverage=1 00:40:29.155 --rc genhtml_function_coverage=1 
00:40:29.155 --rc genhtml_legend=1 00:40:29.155 --rc geninfo_all_blocks=1 00:40:29.155 --rc geninfo_unexecuted_blocks=1 00:40:29.155 00:40:29.155 ' 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:29.155 18:47:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.155 18:47:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:29.155 18:47:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:29.155 18:47:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:40:29.155 18:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:40:31.056 18:47:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:31.056 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:31.056 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:31.056 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:31.056 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:40:31.056 18:47:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:31.056 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:31.057 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:31.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:31.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:40:31.315 00:40:31.315 --- 10.0.0.2 ping statistics --- 00:40:31.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:31.315 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:31.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:31.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:40:31.315 00:40:31.315 --- 10.0.0.1 ping statistics --- 00:40:31.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:31.315 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:31.315 18:47:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3171427 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3171427 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3171427 ']' 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:31.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:31.315 18:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:31.315 [2024-11-18 18:47:29.621245] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:31.315 [2024-11-18 18:47:29.623919] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:40:31.315 [2024-11-18 18:47:29.624021] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:31.574 [2024-11-18 18:47:29.772655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:31.574 [2024-11-18 18:47:29.897007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:31.574 [2024-11-18 18:47:29.897068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:31.574 [2024-11-18 18:47:29.897091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:31.574 [2024-11-18 18:47:29.897108] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:31.574 [2024-11-18 18:47:29.897127] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:31.574 [2024-11-18 18:47:29.899479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:31.574 [2024-11-18 18:47:29.899542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:31.574 [2024-11-18 18:47:29.899589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:31.574 [2024-11-18 18:47:29.899622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:31.574 [2024-11-18 18:47:29.900304] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:32.509 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:32.509 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:40:32.509 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:32.509 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:32.509 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:32.509 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:32.509 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:40:32.509 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.509 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:32.509 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.509 18:47:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:40:32.509 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.509 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:32.767 [2024-11-18 18:47:30.868781] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:32.767 [2024-11-18 18:47:30.869920] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:32.767 [2024-11-18 18:47:30.871055] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:32.767 [2024-11-18 18:47:30.872186] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:40:32.767 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.767 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:32.767 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.767 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:32.767 [2024-11-18 18:47:30.880659] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:32.767 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.767 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:32.767 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.767 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:32.767 Malloc0 00:40:32.767 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.768 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:32.768 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.768 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:32.768 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.768 18:47:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:32.768 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.768 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:32.768 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.768 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:32.768 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.768 18:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:32.768 [2024-11-18 18:47:31.004854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3171589 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3171591 00:40:32.768 18:47:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:32.768 { 00:40:32.768 "params": { 00:40:32.768 "name": "Nvme$subsystem", 00:40:32.768 "trtype": "$TEST_TRANSPORT", 00:40:32.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:32.768 "adrfam": "ipv4", 00:40:32.768 "trsvcid": "$NVMF_PORT", 00:40:32.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:32.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:32.768 "hdgst": ${hdgst:-false}, 00:40:32.768 "ddgst": ${ddgst:-false} 00:40:32.768 }, 00:40:32.768 "method": "bdev_nvme_attach_controller" 00:40:32.768 } 00:40:32.768 EOF 00:40:32.768 )") 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3171593 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:32.768 18:47:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:32.768 { 00:40:32.768 "params": { 00:40:32.768 "name": "Nvme$subsystem", 00:40:32.768 "trtype": "$TEST_TRANSPORT", 00:40:32.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:32.768 "adrfam": "ipv4", 00:40:32.768 "trsvcid": "$NVMF_PORT", 00:40:32.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:32.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:32.768 "hdgst": ${hdgst:-false}, 00:40:32.768 "ddgst": ${ddgst:-false} 00:40:32.768 }, 00:40:32.768 "method": "bdev_nvme_attach_controller" 00:40:32.768 } 00:40:32.768 EOF 00:40:32.768 )") 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3171596 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:32.768 { 00:40:32.768 "params": { 00:40:32.768 "name": 
"Nvme$subsystem", 00:40:32.768 "trtype": "$TEST_TRANSPORT", 00:40:32.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:32.768 "adrfam": "ipv4", 00:40:32.768 "trsvcid": "$NVMF_PORT", 00:40:32.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:32.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:32.768 "hdgst": ${hdgst:-false}, 00:40:32.768 "ddgst": ${ddgst:-false} 00:40:32.768 }, 00:40:32.768 "method": "bdev_nvme_attach_controller" 00:40:32.768 } 00:40:32.768 EOF 00:40:32.768 )") 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:32.768 { 00:40:32.768 "params": { 00:40:32.768 "name": "Nvme$subsystem", 00:40:32.768 "trtype": "$TEST_TRANSPORT", 00:40:32.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:32.768 "adrfam": "ipv4", 00:40:32.768 "trsvcid": "$NVMF_PORT", 00:40:32.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:32.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:32.768 "hdgst": ${hdgst:-false}, 00:40:32.768 "ddgst": ${ddgst:-false} 00:40:32.768 }, 00:40:32.768 "method": 
"bdev_nvme_attach_controller" 00:40:32.768 } 00:40:32.768 EOF 00:40:32.768 )") 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3171589 00:40:32.768 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:32.769 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:32.769 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:32.769 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:32.769 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:32.769 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:32.769 "params": { 00:40:32.769 "name": "Nvme1", 00:40:32.769 "trtype": "tcp", 00:40:32.769 "traddr": "10.0.0.2", 00:40:32.769 "adrfam": "ipv4", 00:40:32.769 "trsvcid": "4420", 00:40:32.769 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:32.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:32.769 "hdgst": false, 00:40:32.769 "ddgst": false 00:40:32.769 }, 00:40:32.769 "method": "bdev_nvme_attach_controller" 00:40:32.769 }' 00:40:32.769 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:40:32.769 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:32.769 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:32.769 "params": { 00:40:32.769 "name": "Nvme1", 00:40:32.769 "trtype": "tcp", 00:40:32.769 "traddr": "10.0.0.2", 00:40:32.769 "adrfam": "ipv4", 00:40:32.769 "trsvcid": "4420", 00:40:32.769 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:32.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:32.769 "hdgst": false, 00:40:32.769 "ddgst": false 00:40:32.769 }, 00:40:32.769 "method": "bdev_nvme_attach_controller" 00:40:32.769 }' 00:40:32.769 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:32.769 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:32.769 "params": { 00:40:32.769 "name": "Nvme1", 00:40:32.769 "trtype": "tcp", 00:40:32.769 "traddr": "10.0.0.2", 00:40:32.769 "adrfam": "ipv4", 00:40:32.769 "trsvcid": "4420", 00:40:32.769 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:32.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:32.769 "hdgst": false, 00:40:32.769 "ddgst": false 00:40:32.769 }, 00:40:32.769 "method": "bdev_nvme_attach_controller" 00:40:32.769 }' 00:40:32.769 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:32.769 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:32.769 "params": { 00:40:32.769 "name": "Nvme1", 00:40:32.769 "trtype": "tcp", 00:40:32.769 "traddr": "10.0.0.2", 00:40:32.769 "adrfam": "ipv4", 00:40:32.769 "trsvcid": "4420", 00:40:32.769 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:32.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:32.769 "hdgst": false, 00:40:32.769 "ddgst": false 00:40:32.769 }, 00:40:32.769 "method": "bdev_nvme_attach_controller" 
00:40:32.769 }' 00:40:32.769 [2024-11-18 18:47:31.091928] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:40:32.769 [2024-11-18 18:47:31.091928] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:40:32.769 [2024-11-18 18:47:31.092082] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-18 18:47:31.092082] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:40:32.769 --proc-type=auto ] 00:40:32.769 [2024-11-18 18:47:31.093517] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:40:32.769 [2024-11-18 18:47:31.093517] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:40:32.769 [2024-11-18 18:47:31.093671] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:40:32.769 [2024-11-18 18:47:31.093675] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:40:33.028 [2024-11-18 18:47:31.340470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:33.285 [2024-11-18 18:47:31.448035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:33.285 [2024-11-18 18:47:31.463784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:33.285 [2024-11-18 18:47:31.518382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:33.285 [2024-11-18 18:47:31.568983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:33.285 [2024-11-18 18:47:31.587893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:33.543 [2024-11-18 18:47:31.634595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:33.543 [2024-11-18 18:47:31.706248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:40:33.801 Running I/O for 1 seconds... 00:40:33.801 Running I/O for 1 seconds... 00:40:33.801 Running I/O for 1 seconds... 00:40:33.801 Running I/O for 1 seconds... 
00:40:34.734 9025.00 IOPS, 35.25 MiB/s 00:40:34.734 Latency(us) 00:40:34.734 [2024-11-18T17:47:33.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:34.734 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:40:34.734 Nvme1n1 : 1.01 9086.39 35.49 0.00 0.00 14022.77 6796.33 20194.80 00:40:34.734 [2024-11-18T17:47:33.071Z] =================================================================================================================== 00:40:34.734 [2024-11-18T17:47:33.071Z] Total : 9086.39 35.49 0.00 0.00 14022.77 6796.33 20194.80 00:40:34.734 4015.00 IOPS, 15.68 MiB/s 00:40:34.734 Latency(us) 00:40:34.734 [2024-11-18T17:47:33.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:34.734 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:40:34.734 Nvme1n1 : 1.03 4035.73 15.76 0.00 0.00 31209.53 5946.79 49516.09 00:40:34.734 [2024-11-18T17:47:33.071Z] =================================================================================================================== 00:40:34.734 [2024-11-18T17:47:33.071Z] Total : 4035.73 15.76 0.00 0.00 31209.53 5946.79 49516.09 00:40:34.734 143144.00 IOPS, 559.16 MiB/s 00:40:34.734 Latency(us) 00:40:34.734 [2024-11-18T17:47:33.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:34.734 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:40:34.734 Nvme1n1 : 1.00 142848.73 558.00 0.00 0.00 891.49 380.78 2026.76 00:40:34.734 [2024-11-18T17:47:33.071Z] =================================================================================================================== 00:40:34.734 [2024-11-18T17:47:33.071Z] Total : 142848.73 558.00 0.00 0.00 891.49 380.78 2026.76 00:40:34.992 3986.00 IOPS, 15.57 MiB/s 00:40:34.992 Latency(us) 00:40:34.992 [2024-11-18T17:47:33.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:34.992 Job: Nvme1n1 (Core Mask 0x80, 
workload: unmap, depth: 128, IO size: 4096) 00:40:34.992 Nvme1n1 : 1.01 4094.92 16.00 0.00 0.00 31120.46 6310.87 57477.50 00:40:34.992 [2024-11-18T17:47:33.329Z] =================================================================================================================== 00:40:34.992 [2024-11-18T17:47:33.329Z] Total : 4094.92 16.00 0.00 0.00 31120.46 6310.87 57477.50 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3171591 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3171593 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3171596 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:35.559 18:47:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:35.559 rmmod nvme_tcp 00:40:35.559 rmmod nvme_fabrics 00:40:35.559 rmmod nvme_keyring 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3171427 ']' 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3171427 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3171427 ']' 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3171427 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3171427 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3171427' 00:40:35.559 killing process with pid 3171427 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3171427 00:40:35.559 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3171427 00:40:36.933 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:36.933 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:36.933 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:36.933 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:40:36.933 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:36.933 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:40:36.933 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:40:36.933 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:36.933 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:36.933 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:36.933 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:36.933 
18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:38.833 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:38.833 00:40:38.833 real 0m9.899s 00:40:38.833 user 0m21.834s 00:40:38.833 sys 0m4.711s 00:40:38.833 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:38.833 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:38.833 ************************************ 00:40:38.833 END TEST nvmf_bdev_io_wait 00:40:38.833 ************************************ 00:40:38.833 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:38.833 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:38.833 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:38.833 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:38.833 ************************************ 00:40:38.833 START TEST nvmf_queue_depth 00:40:38.833 ************************************ 00:40:38.833 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:38.833 * Looking for test storage... 
00:40:38.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:38.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:38.833 --rc genhtml_branch_coverage=1 00:40:38.833 --rc genhtml_function_coverage=1 00:40:38.833 --rc genhtml_legend=1 00:40:38.833 --rc geninfo_all_blocks=1 00:40:38.833 --rc geninfo_unexecuted_blocks=1 00:40:38.833 00:40:38.833 ' 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:38.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:38.833 --rc genhtml_branch_coverage=1 00:40:38.833 --rc genhtml_function_coverage=1 00:40:38.833 --rc genhtml_legend=1 00:40:38.833 --rc geninfo_all_blocks=1 00:40:38.833 --rc geninfo_unexecuted_blocks=1 00:40:38.833 00:40:38.833 ' 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:38.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:38.833 --rc genhtml_branch_coverage=1 00:40:38.833 --rc genhtml_function_coverage=1 00:40:38.833 --rc genhtml_legend=1 00:40:38.833 --rc geninfo_all_blocks=1 00:40:38.833 --rc geninfo_unexecuted_blocks=1 00:40:38.833 00:40:38.833 ' 00:40:38.833 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:38.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:38.833 --rc genhtml_branch_coverage=1 00:40:38.833 --rc genhtml_function_coverage=1 00:40:38.833 --rc genhtml_legend=1 00:40:38.834 --rc 
geninfo_all_blocks=1 00:40:38.834 --rc geninfo_unexecuted_blocks=1 00:40:38.834 00:40:38.834 ' 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.834 18:47:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:38.834 18:47:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:38.834 18:47:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:40:38.834 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:40:41.364 
18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:41.364 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:41.364 18:47:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:41.364 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:41.364 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:41.365 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:41.365 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:41.365 18:47:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:41.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:41.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:40:41.365 00:40:41.365 --- 10.0.0.2 ping statistics --- 00:40:41.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:41.365 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:41.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:41.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:40:41.365 00:40:41.365 --- 10.0.0.1 ping statistics --- 00:40:41.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:41.365 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:41.365 18:47:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3174005 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3174005 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3174005 ']' 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:41.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:41.365 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:41.365 [2024-11-18 18:47:39.516777] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:41.365 [2024-11-18 18:47:39.519437] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:40:41.365 [2024-11-18 18:47:39.519540] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:41.365 [2024-11-18 18:47:39.667823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:41.624 [2024-11-18 18:47:39.802062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:41.624 [2024-11-18 18:47:39.802153] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:41.624 [2024-11-18 18:47:39.802188] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:41.624 [2024-11-18 18:47:39.802211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:41.624 [2024-11-18 18:47:39.802236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:41.624 [2024-11-18 18:47:39.803856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:41.882 [2024-11-18 18:47:40.179039] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:41.882 [2024-11-18 18:47:40.179515] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:42.447 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:42.447 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:40:42.447 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:42.447 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:42.448 [2024-11-18 18:47:40.508998] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:42.448 Malloc0 00:40:42.448 18:47:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:42.448 [2024-11-18 18:47:40.629137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:42.448 
18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3174217 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3174217 /var/tmp/bdevperf.sock 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3174217 ']' 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:42.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:42.448 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:42.448 [2024-11-18 18:47:40.715387] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:40:42.448 [2024-11-18 18:47:40.715525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3174217 ] 00:40:42.706 [2024-11-18 18:47:40.861183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:42.706 [2024-11-18 18:47:40.996904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:43.640 18:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:43.640 18:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:40:43.640 18:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:43.640 18:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:43.640 18:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:43.640 NVMe0n1 00:40:43.640 18:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:43.640 18:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:43.900 Running I/O for 10 seconds... 
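The target and initiator setup traced above boils down to a short RPC sequence. A sketch of it is below, with every command, path, and address copied from the log; it assumes a running `nvmf_tgt` and the SPDK `rpc.py`/`bdevperf.py` scripts on `PATH`, so it is a summary of what the test did, not a standalone script.

```
# Target side: TCP transport, a 64 MiB malloc bdev (512 B blocks), and an
# NVMe-oF subsystem listening on 10.0.0.2:4420 (as traced in the log).
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf with queue depth 1024, 4 KiB IOs, verify workload,
# 10 seconds, then attach the remote controller and kick off the run.
bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
```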
00:40:45.772 5794.00 IOPS, 22.63 MiB/s [2024-11-18T17:47:45.045Z] 6015.50 IOPS, 23.50 MiB/s [2024-11-18T17:47:46.420Z] 6076.33 IOPS, 23.74 MiB/s [2024-11-18T17:47:47.357Z] 6086.75 IOPS, 23.78 MiB/s [2024-11-18T17:47:48.295Z] 6112.60 IOPS, 23.88 MiB/s [2024-11-18T17:47:49.231Z] 6130.50 IOPS, 23.95 MiB/s [2024-11-18T17:47:50.203Z] 6128.29 IOPS, 23.94 MiB/s [2024-11-18T17:47:51.222Z] 6134.25 IOPS, 23.96 MiB/s [2024-11-18T17:47:52.158Z] 6127.44 IOPS, 23.94 MiB/s [2024-11-18T17:47:52.416Z] 6124.10 IOPS, 23.92 MiB/s 00:40:54.079 Latency(us) 00:40:54.079 [2024-11-18T17:47:52.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:54.079 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:40:54.079 Verification LBA range: start 0x0 length 0x4000 00:40:54.079 NVMe0n1 : 10.14 6134.28 23.96 0.00 0.00 165856.76 27185.30 97090.37 00:40:54.079 [2024-11-18T17:47:52.416Z] =================================================================================================================== 00:40:54.079 [2024-11-18T17:47:52.416Z] Total : 6134.28 23.96 0.00 0.00 165856.76 27185.30 97090.37 00:40:54.079 { 00:40:54.079 "results": [ 00:40:54.079 { 00:40:54.079 "job": "NVMe0n1", 00:40:54.079 "core_mask": "0x1", 00:40:54.079 "workload": "verify", 00:40:54.079 "status": "finished", 00:40:54.079 "verify_range": { 00:40:54.079 "start": 0, 00:40:54.079 "length": 16384 00:40:54.079 }, 00:40:54.079 "queue_depth": 1024, 00:40:54.079 "io_size": 4096, 00:40:54.079 "runtime": 10.140885, 00:40:54.079 "iops": 6134.277235172275, 00:40:54.080 "mibps": 23.9620204498917, 00:40:54.080 "io_failed": 0, 00:40:54.080 "io_timeout": 0, 00:40:54.080 "avg_latency_us": 165856.75757733587, 00:40:54.080 "min_latency_us": 27185.303703703703, 00:40:54.080 "max_latency_us": 97090.37037037036 00:40:54.080 } 00:40:54.080 ], 00:40:54.080 "core_count": 1 00:40:54.080 } 00:40:54.080 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 3174217 00:40:54.080 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3174217 ']' 00:40:54.080 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3174217 00:40:54.080 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:40:54.080 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:54.080 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3174217 00:40:54.080 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:54.080 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:54.080 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3174217' 00:40:54.080 killing process with pid 3174217 00:40:54.080 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3174217 00:40:54.080 Received shutdown signal, test time was about 10.000000 seconds 00:40:54.080 00:40:54.080 Latency(us) 00:40:54.080 [2024-11-18T17:47:52.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:54.080 [2024-11-18T17:47:52.417Z] =================================================================================================================== 00:40:54.080 [2024-11-18T17:47:52.417Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:54.080 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3174217 00:40:55.013 18:47:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:55.013 rmmod nvme_tcp 00:40:55.013 rmmod nvme_fabrics 00:40:55.013 rmmod nvme_keyring 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3174005 ']' 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3174005 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3174005 ']' 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3174005 00:40:55.013 18:47:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3174005 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3174005' 00:40:55.013 killing process with pid 3174005 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3174005 00:40:55.013 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3174005 00:40:56.388 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:56.388 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:56.388 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:56.388 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:40:56.388 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:40:56.388 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:56.388 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
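The bdevperf results JSON printed earlier in this test carries enough fields to cross-check itself. A minimal sketch (the field names are copied from the JSON in the log; the two consistency checks are my own, not part of SPDK's output): MiB/s should equal IOPS × IO size, and by Little's law the sustained concurrency, IOPS × average latency, should sit near the configured queue depth of 1024.

```python
import json

# Results JSON as emitted by bdevperf above, abbreviated to the fields used here.
results = json.loads("""
{
  "results": [
    {
      "job": "NVMe0n1",
      "queue_depth": 1024,
      "io_size": 4096,
      "runtime": 10.140885,
      "iops": 6134.277235172275,
      "mibps": 23.9620204498917,
      "avg_latency_us": 165856.75757733587
    }
  ]
}
""")
job = results["results"][0]

# Throughput: MiB/s = IOPS * IO size in bytes / 2^20.
mibps = job["iops"] * job["io_size"] / (1024 * 1024)

# Little's law: in-flight IOs ~= IOPS * average latency (latency converted
# from microseconds to seconds). With -q 1024 this should land near 1024.
inflight = job["iops"] * job["avg_latency_us"] / 1e6

print(f"{mibps:.2f} MiB/s, ~{inflight:.0f} IOs in flight")
```

With the numbers from this run the recomputed throughput matches the reported `mibps`, and the inferred concurrency comes out just under the configured depth, which is expected since the queue is not perfectly full at the start and end of the run.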
00:40:56.388 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:56.388 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:56.388 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:56.388 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:56.388 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:58.287 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:58.287 00:40:58.287 real 0m19.628s 00:40:58.287 user 0m27.026s 00:40:58.287 sys 0m3.799s 00:40:58.287 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:58.287 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:58.287 ************************************ 00:40:58.287 END TEST nvmf_queue_depth 00:40:58.287 ************************************ 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:58.546 ************************************ 00:40:58.546 START 
TEST nvmf_target_multipath 00:40:58.546 ************************************ 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:58.546 * Looking for test storage... 00:40:58.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:40:58.546 18:47:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:58.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:58.546 --rc genhtml_branch_coverage=1 00:40:58.546 --rc genhtml_function_coverage=1 00:40:58.546 --rc genhtml_legend=1 00:40:58.546 --rc geninfo_all_blocks=1 00:40:58.546 --rc geninfo_unexecuted_blocks=1 00:40:58.546 00:40:58.546 ' 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:58.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:58.546 --rc genhtml_branch_coverage=1 00:40:58.546 --rc genhtml_function_coverage=1 00:40:58.546 --rc genhtml_legend=1 00:40:58.546 --rc geninfo_all_blocks=1 00:40:58.546 --rc geninfo_unexecuted_blocks=1 00:40:58.546 00:40:58.546 ' 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:58.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:58.546 --rc genhtml_branch_coverage=1 00:40:58.546 --rc genhtml_function_coverage=1 00:40:58.546 --rc genhtml_legend=1 00:40:58.546 --rc geninfo_all_blocks=1 00:40:58.546 --rc geninfo_unexecuted_blocks=1 00:40:58.546 00:40:58.546 ' 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:58.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:58.546 --rc genhtml_branch_coverage=1 00:40:58.546 --rc genhtml_function_coverage=1 00:40:58.546 --rc genhtml_legend=1 00:40:58.546 --rc geninfo_all_blocks=1 00:40:58.546 --rc geninfo_unexecuted_blocks=1 00:40:58.546 00:40:58.546 ' 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:58.546 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:58.547 18:47:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:58.547 18:47:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:40:58.547 18:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:41:00.446 18:47:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:00.446 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:00.446 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:00.446 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:00.447 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:00.447 18:47:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:00.447 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:00.447 18:47:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:00.447 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:00.706 18:47:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:00.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:00.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:41:00.706 00:41:00.706 --- 10.0.0.2 ping statistics --- 00:41:00.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:00.706 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:00.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:00.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:41:00.706 00:41:00.706 --- 10.0.0.1 ping statistics --- 00:41:00.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:00.706 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:41:00.706 only one NIC for nvmf test 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:41:00.706 18:47:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:00.706 rmmod nvme_tcp 00:41:00.706 rmmod nvme_fabrics 00:41:00.706 rmmod nvme_keyring 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:41:00.706 18:47:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:00.706 18:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:03.238 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:03.238 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:41:03.238 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:41:03.238 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:03.238 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:41:03.238 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:03.238 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:41:03.238 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:41:03.238 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:03.238 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:03.238 
18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:03.238 00:41:03.238 real 0m4.361s 00:41:03.238 user 0m0.834s 00:41:03.238 sys 0m1.522s 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:41:03.238 ************************************ 00:41:03.238 END TEST nvmf_target_multipath 00:41:03.238 ************************************ 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:03.238 ************************************ 00:41:03.238 START TEST nvmf_zcopy 00:41:03.238 ************************************ 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:41:03.238 * Looking for test storage... 
00:41:03.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:41:03.238 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:41:03.239 18:48:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:03.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.239 --rc genhtml_branch_coverage=1 00:41:03.239 --rc genhtml_function_coverage=1 00:41:03.239 --rc genhtml_legend=1 00:41:03.239 --rc geninfo_all_blocks=1 00:41:03.239 --rc geninfo_unexecuted_blocks=1 00:41:03.239 00:41:03.239 ' 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:03.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.239 --rc genhtml_branch_coverage=1 00:41:03.239 --rc genhtml_function_coverage=1 00:41:03.239 --rc genhtml_legend=1 00:41:03.239 --rc geninfo_all_blocks=1 00:41:03.239 --rc geninfo_unexecuted_blocks=1 00:41:03.239 00:41:03.239 ' 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:03.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.239 --rc genhtml_branch_coverage=1 00:41:03.239 --rc genhtml_function_coverage=1 00:41:03.239 --rc genhtml_legend=1 00:41:03.239 --rc geninfo_all_blocks=1 00:41:03.239 --rc geninfo_unexecuted_blocks=1 00:41:03.239 00:41:03.239 ' 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:03.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.239 --rc genhtml_branch_coverage=1 00:41:03.239 --rc genhtml_function_coverage=1 00:41:03.239 --rc genhtml_legend=1 00:41:03.239 --rc geninfo_all_blocks=1 00:41:03.239 --rc geninfo_unexecuted_blocks=1 00:41:03.239 00:41:03.239 ' 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:03.239 18:48:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:03.239 18:48:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:03.239 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:03.240 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:03.240 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:03.240 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:03.240 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:41:03.240 18:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:05.141 
18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:05.141 18:48:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:05.141 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:05.141 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:05.141 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:05.141 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:05.141 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:05.142 18:48:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:05.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:05.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:41:05.142 00:41:05.142 --- 10.0.0.2 ping statistics --- 00:41:05.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:05.142 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:05.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:05.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:41:05.142 00:41:05.142 --- 10.0.0.1 ping statistics --- 00:41:05.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:05.142 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=3179765 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3179765 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3179765 ']' 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:05.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:05.142 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:05.400 [2024-11-18 18:48:03.512003] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:05.401 [2024-11-18 18:48:03.514532] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:41:05.401 [2024-11-18 18:48:03.514673] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:05.401 [2024-11-18 18:48:03.657496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:05.659 [2024-11-18 18:48:03.778027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:05.659 [2024-11-18 18:48:03.778107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:05.659 [2024-11-18 18:48:03.778133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:05.659 [2024-11-18 18:48:03.778152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:05.659 [2024-11-18 18:48:03.778171] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:05.659 [2024-11-18 18:48:03.779907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:05.917 [2024-11-18 18:48:04.102168] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:05.917 [2024-11-18 18:48:04.102619] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:06.175 [2024-11-18 18:48:04.488916] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:06.175 
18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:06.175 [2024-11-18 18:48:04.505175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:06.175 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:06.434 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:06.434 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:41:06.434 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:06.434 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:06.434 malloc0 00:41:06.434 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:06.434 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:41:06.434 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:06.434 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:06.434 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:06.434 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:41:06.434 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:41:06.434 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:41:06.434 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:41:06.434 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:06.434 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:06.434 { 00:41:06.434 "params": { 00:41:06.434 "name": "Nvme$subsystem", 00:41:06.434 "trtype": "$TEST_TRANSPORT", 00:41:06.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:06.434 "adrfam": "ipv4", 00:41:06.434 "trsvcid": "$NVMF_PORT", 00:41:06.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:06.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:06.434 "hdgst": ${hdgst:-false}, 00:41:06.434 "ddgst": ${ddgst:-false} 00:41:06.434 }, 00:41:06.434 "method": "bdev_nvme_attach_controller" 00:41:06.434 } 00:41:06.434 EOF 00:41:06.434 )") 00:41:06.434 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:41:06.434 18:48:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:41:06.434 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:41:06.434 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:06.434 "params": { 00:41:06.434 "name": "Nvme1", 00:41:06.434 "trtype": "tcp", 00:41:06.434 "traddr": "10.0.0.2", 00:41:06.434 "adrfam": "ipv4", 00:41:06.434 "trsvcid": "4420", 00:41:06.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:06.434 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:06.434 "hdgst": false, 00:41:06.434 "ddgst": false 00:41:06.434 }, 00:41:06.434 "method": "bdev_nvme_attach_controller" 00:41:06.434 }' 00:41:06.434 [2024-11-18 18:48:04.650864] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:41:06.434 [2024-11-18 18:48:04.650991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3179922 ] 00:41:06.693 [2024-11-18 18:48:04.792305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:06.693 [2024-11-18 18:48:04.927147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:07.260 Running I/O for 10 seconds... 
00:41:09.571 3955.00 IOPS, 30.90 MiB/s [2024-11-18T17:48:08.844Z] 3986.50 IOPS, 31.14 MiB/s [2024-11-18T17:48:09.796Z] 3995.33 IOPS, 31.21 MiB/s [2024-11-18T17:48:10.730Z] 4005.25 IOPS, 31.29 MiB/s [2024-11-18T17:48:11.665Z] 4014.40 IOPS, 31.36 MiB/s [2024-11-18T17:48:12.600Z] 4048.83 IOPS, 31.63 MiB/s [2024-11-18T17:48:13.976Z] 4080.57 IOPS, 31.88 MiB/s [2024-11-18T17:48:14.912Z] 4077.38 IOPS, 31.85 MiB/s [2024-11-18T17:48:15.847Z] 4074.67 IOPS, 31.83 MiB/s [2024-11-18T17:48:15.847Z] 4074.50 IOPS, 31.83 MiB/s 00:41:17.510 Latency(us) 00:41:17.510 [2024-11-18T17:48:15.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:17.510 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:41:17.510 Verification LBA range: start 0x0 length 0x1000 00:41:17.510 Nvme1n1 : 10.02 4078.56 31.86 0.00 0.00 31297.70 3252.53 42525.58 00:41:17.510 [2024-11-18T17:48:15.847Z] =================================================================================================================== 00:41:17.510 [2024-11-18T17:48:15.847Z] Total : 4078.56 31.86 0.00 0.00 31297.70 3252.53 42525.58 00:41:18.447 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3181734 00:41:18.447 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:41:18.447 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:18.447 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:41:18.447 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:41:18.447 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:41:18.447 18:48:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:41:18.447 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:18.447 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:18.447 { 00:41:18.447 "params": { 00:41:18.447 "name": "Nvme$subsystem", 00:41:18.447 "trtype": "$TEST_TRANSPORT", 00:41:18.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:18.447 "adrfam": "ipv4", 00:41:18.447 "trsvcid": "$NVMF_PORT", 00:41:18.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:18.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:18.447 "hdgst": ${hdgst:-false}, 00:41:18.447 "ddgst": ${ddgst:-false} 00:41:18.447 }, 00:41:18.447 "method": "bdev_nvme_attach_controller" 00:41:18.447 } 00:41:18.447 EOF 00:41:18.447 )") 00:41:18.447 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:41:18.447 [2024-11-18 18:48:16.432808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.432861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
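The `gen_nvmf_target_json` pattern traced above (nvmf/common.sh markers @560 through @586) builds one JSON fragment per subsystem from a heredoc, joins the fragments with `IFS=,`, and feeds the result to bdevperf as its `--json` config. A minimal standalone sketch of that pattern follows; the literal values (`tcp`, `10.0.0.2`, `4420`, the cnode/host NQNs) are copied from the expanded config printed elsewhere in this log, and the variable names mirror the ones visible in the trace rather than any guaranteed SPDK interface:

```shell
# Sketch of the gen_nvmf_target_json config assembly seen in the trace:
# per-subsystem heredoc fragment -> array -> IFS=, join -> JSON config.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments exactly as the trace does (IFS=, then "${config[*]}");
# in the real script this is piped through `jq .` and into bdevperf via
# --json /dev/fd/63. Here we just print the assembled document.
json="$(IFS=,; printf '%s\n' "${config[*]}")"
printf '%s\n' "$json"
```

With a single subsystem this reproduces the expanded `bdev_nvme_attach_controller` parameter block shown in the log; with several subsystems the `IFS=,` join is what turns the fragments into a comma-separated list for `jq` to normalize.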
00:41:18.447 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:41:18.447 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:18.447 "params": { 00:41:18.447 "name": "Nvme1", 00:41:18.447 "trtype": "tcp", 00:41:18.447 "traddr": "10.0.0.2", 00:41:18.447 "adrfam": "ipv4", 00:41:18.447 "trsvcid": "4420", 00:41:18.447 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:18.447 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:18.447 "hdgst": false, 00:41:18.447 "ddgst": false 00:41:18.447 }, 00:41:18.447 "method": "bdev_nvme_attach_controller" 00:41:18.447 }' 00:41:18.447 [2024-11-18 18:48:16.440724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.440759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.448679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.448710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.456696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.456734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.464714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.464745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.472676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.472707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.480697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:41:18.447 [2024-11-18 18:48:16.480727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.488694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.488725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.496678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.496708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.504707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.504738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.510094] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:41:18.447 [2024-11-18 18:48:16.510224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3181734 ] 00:41:18.447 [2024-11-18 18:48:16.512678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.512711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.520713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.520744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.528698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.528729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:41:18.447 [2024-11-18 18:48:16.536695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.536729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.544720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.544755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.552705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.552739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.560713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.560746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.568718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.568751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.576696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.576729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.584703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.584734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.592717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.592756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.600692] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.600723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.608718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.608750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.616699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.616731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.624693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.447 [2024-11-18 18:48:16.624724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.447 [2024-11-18 18:48:16.632706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.448 [2024-11-18 18:48:16.632737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.448 [2024-11-18 18:48:16.640699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.448 [2024-11-18 18:48:16.640731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.448 [2024-11-18 18:48:16.648675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:18.448 [2024-11-18 18:48:16.648702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.448 [2024-11-18 18:48:16.648748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.448 [2024-11-18 18:48:16.656721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.448 [2024-11-18 18:48:16.656753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:41:18.448 [2024-11-18 18:48:16.664739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.448 [2024-11-18 18:48:16.664785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.448 [2024-11-18 18:48:16.672792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.448 [2024-11-18 18:48:16.672840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.448 [2024-11-18 18:48:16.680706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.448 [2024-11-18 18:48:16.680738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.448 [2024-11-18 18:48:16.688679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.448 [2024-11-18 18:48:16.688711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.448 [2024-11-18 18:48:16.696726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.448 [2024-11-18 18:48:16.696759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.448 [2024-11-18 18:48:16.704701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.448 [2024-11-18 18:48:16.704734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.448 [2024-11-18 18:48:16.712749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.448 [2024-11-18 18:48:16.712782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.448 [2024-11-18 18:48:16.720709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.448 [2024-11-18 18:48:16.720742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.448 [2024-11-18 18:48:16.728683] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.448 [2024-11-18 18:48:16.728716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.448 [2024-11-18 18:48:16.736724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.448 [2024-11-18 18:48:16.736757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.448 [2024-11-18 18:48:16.744718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.448 [2024-11-18 18:48:16.744751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.448 [2024-11-18 18:48:16.752702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.448 [2024-11-18 18:48:16.752734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.448 [2024-11-18 18:48:16.760703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.448 [2024-11-18 18:48:16.760735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.448 [2024-11-18 18:48:16.768711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.448 [2024-11-18 18:48:16.768744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.448 [2024-11-18 18:48:16.776703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.448 [2024-11-18 18:48:16.776735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.448 [2024-11-18 18:48:16.778387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:18.707 [2024-11-18 18:48:16.784703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.784734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:41:18.707 [2024-11-18 18:48:16.792708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.792748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.800776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.800829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.808744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.808785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.816684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.816715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.824707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.824738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.832690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.832722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.840700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.840732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.848718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.848750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.856682] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.856714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.864789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.864841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.872778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.872832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.880756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.880804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.888799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.888863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.896684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.896717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.904712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.904743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.912709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.912740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.920684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.920715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.928703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.928735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.936703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.936745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.944710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.944742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.952723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.952756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.960697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.960729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.968705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.968738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.976728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.976761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.984678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 
[2024-11-18 18:48:16.984708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:16.992704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:16.992735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:17.000705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:17.000737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:17.008773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:17.008827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:17.016794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:17.016846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:17.024762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:17.024814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:17.032719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:17.032752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.707 [2024-11-18 18:48:17.040722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.707 [2024-11-18 18:48:17.040754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.048692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.048724] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.056718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.056751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.064702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.064734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.072685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.072716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.080700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.080732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.088686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.088717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.096706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.096739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.104714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.104747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.112686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.112718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:18.966 [2024-11-18 18:48:17.120709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.120744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.128731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.128764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.136666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.136697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.144709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.144741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.152695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.152727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.160704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.160736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.168700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.168732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.176725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.176757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.184719] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.184757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.192797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.192831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.200718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.200751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.208714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.208750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.216714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.216746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.224731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.224764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.232718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.232749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.240670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.240701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.248707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.248741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.256719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.256750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.264684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.264732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.272727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.272759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.280707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.280760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.288772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.288805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.966 [2024-11-18 18:48:17.296723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.966 [2024-11-18 18:48:17.296755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.225 [2024-11-18 18:48:17.304696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.225 [2024-11-18 18:48:17.304728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.225 [2024-11-18 18:48:17.312727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.225 
[2024-11-18 18:48:17.312759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.225 [2024-11-18 18:48:17.320752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.225 [2024-11-18 18:48:17.320784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.225 [2024-11-18 18:48:17.328707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.225 [2024-11-18 18:48:17.328738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.225 [2024-11-18 18:48:17.336720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.225 [2024-11-18 18:48:17.336751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.225 [2024-11-18 18:48:17.344711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.225 [2024-11-18 18:48:17.344742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.225 [2024-11-18 18:48:17.352732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.225 [2024-11-18 18:48:17.352763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.225 [2024-11-18 18:48:17.360721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.225 [2024-11-18 18:48:17.360752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.225 Running I/O for 5 seconds... 
00:41:19.225 [2024-11-18 18:48:17.389134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.225 [2024-11-18 18:48:17.389172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.225 [2024-11-18 18:48:17.402527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.225 [2024-11-18 18:48:17.402560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.225 [2024-11-18 18:48:17.419843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.225 [2024-11-18 18:48:17.419876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.225 [2024-11-18 18:48:17.434720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.225 [2024-11-18 18:48:17.434755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.225 [2024-11-18 18:48:17.451781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.225 [2024-11-18 18:48:17.451814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.225 [2024-11-18 18:48:17.468047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.225 [2024-11-18 18:48:17.468088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.225 [2024-11-18 18:48:17.483724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.225 [2024-11-18 18:48:17.483757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.225 [2024-11-18 18:48:17.498940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.225 [2024-11-18 18:48:17.498981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.225 [2024-11-18 18:48:17.515057] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.225 [2024-11-18 18:48:17.515098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.225 [2024-11-18 18:48:17.531063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.225 [2024-11-18 18:48:17.531103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.225 [2024-11-18 18:48:17.546449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.225 [2024-11-18 18:48:17.546491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.484 [2024-11-18 18:48:17.561996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.484 [2024-11-18 18:48:17.562038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.484 [2024-11-18 18:48:17.577905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.484 [2024-11-18 18:48:17.577945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.484 [2024-11-18 18:48:17.593389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.484 [2024-11-18 18:48:17.593430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.484 [2024-11-18 18:48:17.609260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.484 [2024-11-18 18:48:17.609296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.484 [2024-11-18 18:48:17.624758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.484 [2024-11-18 18:48:17.624800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.484 [2024-11-18 18:48:17.640570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:19.484 [2024-11-18 18:48:17.640620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.484 [2024-11-18 18:48:17.656270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.484 [2024-11-18 18:48:17.656311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.484 [2024-11-18 18:48:17.671929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.484 [2024-11-18 18:48:17.671978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.484 [2024-11-18 18:48:17.687740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.484 [2024-11-18 18:48:17.687773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.484 [2024-11-18 18:48:17.703221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.484 [2024-11-18 18:48:17.703254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.484 [2024-11-18 18:48:17.719124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.484 [2024-11-18 18:48:17.719158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.484 [2024-11-18 18:48:17.734259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.484 [2024-11-18 18:48:17.734292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.484 [2024-11-18 18:48:17.749878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.484 [2024-11-18 18:48:17.749926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.484 [2024-11-18 18:48:17.765470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.484 
[2024-11-18 18:48:17.765511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.484 [2024-11-18 18:48:17.780822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.484 [2024-11-18 18:48:17.780856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.484 [2024-11-18 18:48:17.796622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.484 [2024-11-18 18:48:17.796684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.484 [2024-11-18 18:48:17.811774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.484 [2024-11-18 18:48:17.811807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.742 [2024-11-18 18:48:17.826023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.742 [2024-11-18 18:48:17.826064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.742 [2024-11-18 18:48:17.843121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.742 [2024-11-18 18:48:17.843156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.742 [2024-11-18 18:48:17.857563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.742 [2024-11-18 18:48:17.857604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.742 [2024-11-18 18:48:17.873368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.742 [2024-11-18 18:48:17.873408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.742 [2024-11-18 18:48:17.888852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.742 [2024-11-18 18:48:17.888886] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.742 [2024-11-18 18:48:17.904118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.742 [2024-11-18 18:48:17.904159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.742 [2024-11-18 18:48:17.920066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.742 [2024-11-18 18:48:17.920117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.742 [2024-11-18 18:48:17.935262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.742 [2024-11-18 18:48:17.935302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.742 [2024-11-18 18:48:17.949470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.742 [2024-11-18 18:48:17.949510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.742 [2024-11-18 18:48:17.964570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.742 [2024-11-18 18:48:17.964622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.742 [2024-11-18 18:48:17.980108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.742 [2024-11-18 18:48:17.980141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.742 [2024-11-18 18:48:17.996362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.742 [2024-11-18 18:48:17.996402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.742 [2024-11-18 18:48:18.011956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.742 [2024-11-18 18:48:18.011996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:19.742 [2024-11-18 18:48:18.027505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.742 [2024-11-18 18:48:18.027545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.742 [2024-11-18 18:48:18.043543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.742 [2024-11-18 18:48:18.043583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.742 [2024-11-18 18:48:18.059294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.742 [2024-11-18 18:48:18.059329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.742 [2024-11-18 18:48:18.074884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.742 [2024-11-18 18:48:18.074920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.001 [2024-11-18 18:48:18.089718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.001 [2024-11-18 18:48:18.089752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.001 [2024-11-18 18:48:18.104145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.001 [2024-11-18 18:48:18.104185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.001 [2024-11-18 18:48:18.120137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.001 [2024-11-18 18:48:18.120177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.001 [2024-11-18 18:48:18.135753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.001 [2024-11-18 18:48:18.135793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.001 [2024-11-18 18:48:18.151171] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.001 [2024-11-18 18:48:18.151211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.001 [2024-11-18 18:48:18.167053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.001 [2024-11-18 18:48:18.167094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.001 [2024-11-18 18:48:18.182741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.001 [2024-11-18 18:48:18.182775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.001 [2024-11-18 18:48:18.198789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.001 [2024-11-18 18:48:18.198825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.001 [2024-11-18 18:48:18.214819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.001 [2024-11-18 18:48:18.214863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.001 [2024-11-18 18:48:18.230482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.001 [2024-11-18 18:48:18.230523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.001 [2024-11-18 18:48:18.246785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.001 [2024-11-18 18:48:18.246819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.001 [2024-11-18 18:48:18.261303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.001 [2024-11-18 18:48:18.261338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.001 [2024-11-18 18:48:18.275717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:20.001 [2024-11-18 18:48:18.275750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.001 [2024-11-18 18:48:18.292140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.001 [2024-11-18 18:48:18.292182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.001 [2024-11-18 18:48:18.308374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.001 [2024-11-18 18:48:18.308415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.001 [2024-11-18 18:48:18.324289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.001 [2024-11-18 18:48:18.324330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.260 [2024-11-18 18:48:18.339583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.260 [2024-11-18 18:48:18.339643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.260 [2024-11-18 18:48:18.355690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.260 [2024-11-18 18:48:18.355725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.260 8007.00 IOPS, 62.55 MiB/s [2024-11-18T17:48:18.597Z] [2024-11-18 18:48:18.371850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.260 [2024-11-18 18:48:18.371899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.260 [2024-11-18 18:48:18.387696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.260 [2024-11-18 18:48:18.387728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.260 [2024-11-18 18:48:18.402907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:20.260 [2024-11-18 18:48:18.402940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.260 [2024-11-18 18:48:18.418405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.260 [2024-11-18 18:48:18.418438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.260 [2024-11-18 18:48:18.433575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.260 [2024-11-18 18:48:18.433626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.260 [2024-11-18 18:48:18.448426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.260 [2024-11-18 18:48:18.448460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.260 [2024-11-18 18:48:18.463971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.260 [2024-11-18 18:48:18.464004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.260 [2024-11-18 18:48:18.479712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.260 [2024-11-18 18:48:18.479748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.260 [2024-11-18 18:48:18.495119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.260 [2024-11-18 18:48:18.495160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.260 [2024-11-18 18:48:18.510411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.260 [2024-11-18 18:48:18.510451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.260 [2024-11-18 18:48:18.525640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.260 
[2024-11-18 18:48:18.525680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.260 [2024-11-18 18:48:18.541448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.260 [2024-11-18 18:48:18.541481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.260 [2024-11-18 18:48:18.556797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.260 [2024-11-18 18:48:18.556831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.260 [2024-11-18 18:48:18.572164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.260 [2024-11-18 18:48:18.572206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.260 [2024-11-18 18:48:18.587368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.260 [2024-11-18 18:48:18.587407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.554 [2024-11-18 18:48:18.603310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.554 [2024-11-18 18:48:18.603363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.554 [2024-11-18 18:48:18.618764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.554 [2024-11-18 18:48:18.618806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.554 [2024-11-18 18:48:18.634977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.554 [2024-11-18 18:48:18.635017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.554 [2024-11-18 18:48:18.650279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.554 [2024-11-18 18:48:18.650312] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.554 [2024-11-18 18:48:18.665698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.554 [2024-11-18 18:48:18.665738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.554 [2024-11-18 18:48:18.681177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.554 [2024-11-18 18:48:18.681217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.554 [2024-11-18 18:48:18.696480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.554 [2024-11-18 18:48:18.696522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.554 [2024-11-18 18:48:18.712041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.554 [2024-11-18 18:48:18.712075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.554 [2024-11-18 18:48:18.727826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.554 [2024-11-18 18:48:18.727859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.554 [2024-11-18 18:48:18.743700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.554 [2024-11-18 18:48:18.743734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.554 [2024-11-18 18:48:18.759547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.554 [2024-11-18 18:48:18.759587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.554 [2024-11-18 18:48:18.774399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.554 [2024-11-18 18:48:18.774433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:20.554 [2024-11-18 18:48:18.790713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.554 [2024-11-18 18:48:18.790748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.554 [2024-11-18 18:48:18.805846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.554 [2024-11-18 18:48:18.805886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.554 [2024-11-18 18:48:18.821479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.554 [2024-11-18 18:48:18.821519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.554 [2024-11-18 18:48:18.836790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.554 [2024-11-18 18:48:18.836823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.554 [2024-11-18 18:48:18.852050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.554 [2024-11-18 18:48:18.852091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.554 [2024-11-18 18:48:18.866825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.554 [2024-11-18 18:48:18.866862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.830 [2024-11-18 18:48:18.882484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.830 [2024-11-18 18:48:18.882526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.830 [2024-11-18 18:48:18.898654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.830 [2024-11-18 18:48:18.898690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.830 [2024-11-18 18:48:18.914925] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.830 [2024-11-18 18:48:18.914966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.830 [2024-11-18 18:48:18.930461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.830 [2024-11-18 18:48:18.930495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.830 [2024-11-18 18:48:18.945784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.830 [2024-11-18 18:48:18.945817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.830 [2024-11-18 18:48:18.961569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.830 [2024-11-18 18:48:18.961629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.830 [2024-11-18 18:48:18.976904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.830 [2024-11-18 18:48:18.976938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.831 [2024-11-18 18:48:18.992306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.831 [2024-11-18 18:48:18.992339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.831 [2024-11-18 18:48:19.007918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.831 [2024-11-18 18:48:19.007959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.831 [2024-11-18 18:48:19.023844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.831 [2024-11-18 18:48:19.023880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.831 [2024-11-18 18:48:19.039547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:20.831 [2024-11-18 18:48:19.039581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.831 [2024-11-18 18:48:19.054357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.831 [2024-11-18 18:48:19.054405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.831 [2024-11-18 18:48:19.070013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.831 [2024-11-18 18:48:19.070053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.831 [2024-11-18 18:48:19.085398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.831 [2024-11-18 18:48:19.085438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.831 [2024-11-18 18:48:19.101427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.831 [2024-11-18 18:48:19.101468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.831 [2024-11-18 18:48:19.117221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.831 [2024-11-18 18:48:19.117254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.831 [2024-11-18 18:48:19.133127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.831 [2024-11-18 18:48:19.133168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.831 [2024-11-18 18:48:19.149079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.831 [2024-11-18 18:48:19.149119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.831 [2024-11-18 18:48:19.164365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.831 
[2024-11-18 18:48:19.164399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.092 [2024-11-18 18:48:19.179221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.092 [2024-11-18 18:48:19.179255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.092 [2024-11-18 18:48:19.195553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.092 [2024-11-18 18:48:19.195593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.092 [2024-11-18 18:48:19.211407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.092 [2024-11-18 18:48:19.211441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.092 [2024-11-18 18:48:19.226840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.092 [2024-11-18 18:48:19.226873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.092 [2024-11-18 18:48:19.241740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.092 [2024-11-18 18:48:19.241774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.092 [2024-11-18 18:48:19.256779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.092 [2024-11-18 18:48:19.256812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.092 [2024-11-18 18:48:19.271846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.092 [2024-11-18 18:48:19.271879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.092 [2024-11-18 18:48:19.286571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.092 [2024-11-18 18:48:19.286622] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.092 [2024-11-18 18:48:19.302160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.092 [2024-11-18 18:48:19.302192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.092 [2024-11-18 18:48:19.317410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.093 [2024-11-18 18:48:19.317449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.093 [2024-11-18 18:48:19.332697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.093 [2024-11-18 18:48:19.332732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.093 [2024-11-18 18:48:19.348661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.093 [2024-11-18 18:48:19.348711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.093 [2024-11-18 18:48:19.365284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.093 [2024-11-18 18:48:19.365324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.093 8085.50 IOPS, 63.17 MiB/s [2024-11-18T17:48:19.430Z] [2024-11-18 18:48:19.381644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.093 [2024-11-18 18:48:19.381686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.093 [2024-11-18 18:48:19.397373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.093 [2024-11-18 18:48:19.397406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.093 [2024-11-18 18:48:19.413774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.093 [2024-11-18 18:48:19.413809] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.350 [2024-11-18 18:48:19.429587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.350 [2024-11-18 18:48:19.429665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.350 [2024-11-18 18:48:19.446258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.350 [2024-11-18 18:48:19.446298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.350 [2024-11-18 18:48:19.462352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.350 [2024-11-18 18:48:19.462385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.350 [2024-11-18 18:48:19.478185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.350 [2024-11-18 18:48:19.478217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.350 [2024-11-18 18:48:19.492996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.350 [2024-11-18 18:48:19.493036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.350 [2024-11-18 18:48:19.507907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.350 [2024-11-18 18:48:19.507940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.350 [2024-11-18 18:48:19.522039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.350 [2024-11-18 18:48:19.522079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.350 [2024-11-18 18:48:19.537926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.350 [2024-11-18 18:48:19.537978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:21.350 [2024-11-18 18:48:19.551830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.350 [2024-11-18 18:48:19.551862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeats at roughly 15 ms intervals, from 18:48:19.568 through 18:48:20.369; only the timestamps change ...]
00:41:22.126 8099.67 IOPS, 63.28 MiB/s [2024-11-18T17:48:20.463Z]
[... the error pair continues repeating from 18:48:20.384 through 18:48:21.360 ...]
00:41:23.161 8133.50 IOPS, 63.54 MiB/s [2024-11-18T17:48:21.498Z]
[... the error pair continues repeating from 18:48:21.376 through 18:48:22.086 ...]
00:41:23.937 [2024-11-18 18:48:22.102772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.937 
[2024-11-18 18:48:22.102807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.938 [2024-11-18 18:48:22.118919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.938 [2024-11-18 18:48:22.118952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.938 [2024-11-18 18:48:22.134581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.938 [2024-11-18 18:48:22.134632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.938 [2024-11-18 18:48:22.150146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.938 [2024-11-18 18:48:22.150178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.938 [2024-11-18 18:48:22.165757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.938 [2024-11-18 18:48:22.165789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.938 [2024-11-18 18:48:22.181319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.938 [2024-11-18 18:48:22.181358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.938 [2024-11-18 18:48:22.197254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.938 [2024-11-18 18:48:22.197287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.938 [2024-11-18 18:48:22.213175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.938 [2024-11-18 18:48:22.213215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.938 [2024-11-18 18:48:22.229029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.938 [2024-11-18 18:48:22.229069] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.938 [2024-11-18 18:48:22.245250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.938 [2024-11-18 18:48:22.245291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.938 [2024-11-18 18:48:22.260680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.938 [2024-11-18 18:48:22.260714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.276340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.276376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.291872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.291905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.306578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.306631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.323348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.323381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.338569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.338627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.353427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.353468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:24.197 [2024-11-18 18:48:22.369458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.369492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 8122.20 IOPS, 63.45 MiB/s [2024-11-18T17:48:22.534Z] [2024-11-18 18:48:22.384299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.384338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.392736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.392768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 00:41:24.197 Latency(us) 00:41:24.197 [2024-11-18T17:48:22.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:24.197 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:41:24.197 Nvme1n1 : 5.02 8122.94 63.46 0.00 0.00 15729.58 6505.05 34564.17 00:41:24.197 [2024-11-18T17:48:22.534Z] =================================================================================================================== 00:41:24.197 [2024-11-18T17:48:22.534Z] Total : 8122.94 63.46 0.00 0.00 15729.58 6505.05 34564.17 00:41:24.197 [2024-11-18 18:48:22.400742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.400774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.408712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.408743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.416746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:24.197 [2024-11-18 18:48:22.416777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.424706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.424737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.432723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.432758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.440716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.440761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.448862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.448928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.456902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.456970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.464735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.464766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.472698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.472728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.480727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.480768] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.488702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.488732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.496723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.496754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.504718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.504748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.512702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.512733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.520759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.520790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.197 [2024-11-18 18:48:22.528703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.197 [2024-11-18 18:48:22.528735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.536742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.536793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.544853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.544916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:24.456 [2024-11-18 18:48:22.552804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.552865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.560769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.560801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.568725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.568757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.576704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.576737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.584722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.584754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.592729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.592761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.600699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.600730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.608728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.608760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.616709] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.616742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.624728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.624761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.632749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.632791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.640734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.640771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.648724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.648755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.656724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.656754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.664702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.664732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.672720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.672751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.680696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.680726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.688720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.688751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.696730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.696761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.704773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.704822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.712838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.712903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.720728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.720759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.728729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.728760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.736741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.736772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.744710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 
[2024-11-18 18:48:22.744741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.752725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.752757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.760721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.456 [2024-11-18 18:48:22.760752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.456 [2024-11-18 18:48:22.768776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.457 [2024-11-18 18:48:22.768835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.457 [2024-11-18 18:48:22.776847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.457 [2024-11-18 18:48:22.776912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.457 [2024-11-18 18:48:22.784851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.457 [2024-11-18 18:48:22.784937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.715 [2024-11-18 18:48:22.792710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.715 [2024-11-18 18:48:22.792744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.715 [2024-11-18 18:48:22.800731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.715 [2024-11-18 18:48:22.800762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.715 [2024-11-18 18:48:22.816700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.715 [2024-11-18 18:48:22.816731] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.715 [2024-11-18 18:48:22.824724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.715 [2024-11-18 18:48:22.824756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.715 [2024-11-18 18:48:22.832722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.715 [2024-11-18 18:48:22.832753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.715 [2024-11-18 18:48:22.840722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.715 [2024-11-18 18:48:22.840755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.715 [2024-11-18 18:48:22.848731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.715 [2024-11-18 18:48:22.848762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.715 [2024-11-18 18:48:22.856733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.715 [2024-11-18 18:48:22.856763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.715 [2024-11-18 18:48:22.864699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.715 [2024-11-18 18:48:22.864730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.715 [2024-11-18 18:48:22.872731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:22.872764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:22.880708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:22.880740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:24.716 [2024-11-18 18:48:22.888726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:22.888757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:22.896723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:22.896754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:22.904699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:22.904730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:22.912735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:22.912765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:22.920718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:22.920749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:22.928713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:22.928743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:22.936719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:22.936750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:22.944739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:22.944799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:22.952803] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:22.952862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:22.960734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:22.960765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:22.968701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:22.968732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:22.976719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:22.976749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:22.984694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:22.984724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:22.992702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:22.992733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:23.000723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:23.000754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:23.008704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:23.008735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:23.016718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:23.016749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:23.024726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:23.024758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:23.032719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:23.032750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:23.040723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:23.040755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.716 [2024-11-18 18:48:23.048727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.716 [2024-11-18 18:48:23.048758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.056704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.056735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.064739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.064780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.072782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.072833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.080751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 
[2024-11-18 18:48:23.080783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.088726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.088757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.096702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.096742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.104726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.104757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.112732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.112762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.120697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.120727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.128748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.128779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.136703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.136750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.144728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.144759] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.152732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.152763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.160758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.160817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.168748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.168783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.176720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.176750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.184697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.184727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.192722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.192753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.200699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.200730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.208723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.208753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:24.975 [2024-11-18 18:48:23.216724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.216755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.224720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.224750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.232720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.232752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.240720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.240751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.248702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.248732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.256726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.256758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.264702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.264733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.272727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.272758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.280723] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.280753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 [2024-11-18 18:48:23.288705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.975 [2024-11-18 18:48:23.288736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3181734) - No such process 00:41:24.975 18:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3181734 00:41:24.975 18:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:24.975 18:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.975 18:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:24.975 18:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:24.975 18:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:41:24.975 18:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.975 18:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:25.234 delay0 00:41:25.234 18:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.234 18:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:41:25.234 18:48:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.234 18:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:25.234 18:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.234 18:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:41:25.234 [2024-11-18 18:48:23.427842] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:41:33.343 Initializing NVMe Controllers 00:41:33.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:33.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:33.343 Initialization complete. Launching workers. 
00:41:33.343 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 223, failed: 17573 00:41:33.343 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 17641, failed to submit 155 00:41:33.343 success 17588, unsuccessful 53, failed 0 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:33.343 rmmod nvme_tcp 00:41:33.343 rmmod nvme_fabrics 00:41:33.343 rmmod nvme_keyring 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3179765 ']' 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3179765 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 3179765 ']' 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3179765 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3179765 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3179765' 00:41:33.343 killing process with pid 3179765 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3179765 00:41:33.343 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3179765 00:41:33.602 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:33.602 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:33.602 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:33.602 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:41:33.602 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:41:33.602 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:41:33.602 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:41:33.602 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:33.602 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:33.602 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:33.602 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:33.602 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:35.502 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:35.502 00:41:35.502 real 0m32.724s 00:41:35.502 user 0m46.873s 00:41:35.502 sys 0m10.258s 00:41:35.502 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:35.502 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:35.502 ************************************ 00:41:35.502 END TEST nvmf_zcopy 00:41:35.502 ************************************ 00:41:35.502 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:35.502 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:35.502 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:35.502 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:35.502 
************************************ 00:41:35.502 START TEST nvmf_nmic 00:41:35.502 ************************************ 00:41:35.502 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:35.760 * Looking for test storage... 00:41:35.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:35.760 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:35.760 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:41:35.760 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:35.760 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:35.760 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:35.760 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:35.760 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:41:35.761 18:48:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:41:35.761 18:48:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:35.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:35.761 --rc genhtml_branch_coverage=1 00:41:35.761 --rc genhtml_function_coverage=1 00:41:35.761 --rc genhtml_legend=1 00:41:35.761 --rc geninfo_all_blocks=1 00:41:35.761 --rc geninfo_unexecuted_blocks=1 00:41:35.761 00:41:35.761 ' 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:35.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:35.761 --rc genhtml_branch_coverage=1 00:41:35.761 --rc genhtml_function_coverage=1 00:41:35.761 --rc genhtml_legend=1 00:41:35.761 --rc geninfo_all_blocks=1 00:41:35.761 --rc geninfo_unexecuted_blocks=1 00:41:35.761 00:41:35.761 ' 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:35.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:35.761 --rc genhtml_branch_coverage=1 00:41:35.761 --rc genhtml_function_coverage=1 00:41:35.761 --rc genhtml_legend=1 00:41:35.761 --rc geninfo_all_blocks=1 00:41:35.761 --rc geninfo_unexecuted_blocks=1 00:41:35.761 00:41:35.761 ' 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:35.761 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:35.761 --rc genhtml_branch_coverage=1 00:41:35.761 --rc genhtml_function_coverage=1 00:41:35.761 --rc genhtml_legend=1 00:41:35.761 --rc geninfo_all_blocks=1 00:41:35.761 --rc geninfo_unexecuted_blocks=1 00:41:35.761 00:41:35.761 ' 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:35.761 18:48:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:35.761 18:48:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:35.761 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:35.762 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:35.762 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:35.762 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:35.762 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:35.762 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:35.762 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:35.762 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:35.762 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:35.762 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:41:35.762 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:37.661 18:48:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:37.661 18:48:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:37.661 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:37.661 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:37.661 18:48:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:37.661 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:37.662 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:37.662 18:48:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:37.662 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:37.662 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:37.920 18:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:37.920 18:48:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:37.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:37.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:41:37.920 00:41:37.920 --- 10.0.0.2 ping statistics --- 00:41:37.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:37.920 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:37.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:37.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:41:37.920 00:41:37.920 --- 10.0.0.1 ping statistics --- 00:41:37.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:37.920 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3185375 
00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3185375 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3185375 ']' 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:37.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:37.920 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:37.920 [2024-11-18 18:48:36.176290] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:37.920 [2024-11-18 18:48:36.178888] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:41:37.920 [2024-11-18 18:48:36.179020] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:38.180 [2024-11-18 18:48:36.322910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:38.180 [2024-11-18 18:48:36.446827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:38.181 [2024-11-18 18:48:36.446894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:38.181 [2024-11-18 18:48:36.446935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:38.181 [2024-11-18 18:48:36.446954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:38.181 [2024-11-18 18:48:36.446988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:38.181 [2024-11-18 18:48:36.449501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:38.181 [2024-11-18 18:48:36.449570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:38.181 [2024-11-18 18:48:36.449616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:38.181 [2024-11-18 18:48:36.449644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:38.747 [2024-11-18 18:48:36.816736] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:38.747 [2024-11-18 18:48:36.825965] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:38.747 [2024-11-18 18:48:36.826144] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:41:38.747 [2024-11-18 18:48:36.826949] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:38.747 [2024-11-18 18:48:36.827295] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:39.005 [2024-11-18 18:48:37.146725] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:39.005 Malloc0 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:39.005 [2024-11-18 18:48:37.262933] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:41:39.005 test case1: single bdev can't be used in multiple subsystems 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:39.005 [2024-11-18 18:48:37.286570] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:41:39.005 [2024-11-18 18:48:37.286657] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:41:39.005 [2024-11-18 18:48:37.286683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.005 request: 00:41:39.005 { 00:41:39.005 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:41:39.005 "namespace": { 00:41:39.005 "bdev_name": "Malloc0", 00:41:39.005 "no_auto_visible": false 00:41:39.005 }, 00:41:39.005 "method": "nvmf_subsystem_add_ns", 00:41:39.005 "req_id": 1 00:41:39.005 } 00:41:39.005 Got JSON-RPC error response 00:41:39.005 response: 00:41:39.005 { 00:41:39.005 "code": -32602, 00:41:39.005 "message": "Invalid parameters" 00:41:39.005 } 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:41:39.005 Adding namespace failed - expected result. 
00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:41:39.005 test case2: host connect to nvmf target in multiple paths 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:39.005 [2024-11-18 18:48:37.294730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.005 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:39.263 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:41:39.521 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:41:39.521 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:41:39.521 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:39.521 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:39.521 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:41:41.417 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:41.417 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:41.417 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:41.417 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:41.417 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:41.417 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:41:41.417 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:41.417 [global] 00:41:41.417 thread=1 00:41:41.417 invalidate=1 00:41:41.417 rw=write 00:41:41.417 time_based=1 00:41:41.417 runtime=1 00:41:41.417 ioengine=libaio 00:41:41.417 direct=1 00:41:41.417 bs=4096 00:41:41.417 iodepth=1 00:41:41.417 norandommap=0 00:41:41.417 numjobs=1 00:41:41.417 00:41:41.417 verify_dump=1 00:41:41.417 verify_backlog=512 00:41:41.417 verify_state_save=0 00:41:41.417 do_verify=1 00:41:41.417 verify=crc32c-intel 00:41:41.417 [job0] 00:41:41.417 filename=/dev/nvme0n1 00:41:41.674 Could not set queue depth (nvme0n1) 00:41:41.674 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:41.675 fio-3.35 00:41:41.675 Starting 1 thread 00:41:43.050 00:41:43.050 job0: (groupid=0, jobs=1): err= 0: pid=3185885: Mon Nov 18 
18:48:41 2024 00:41:43.050 read: IOPS=1986, BW=7944KiB/s (8135kB/s)(7952KiB/1001msec) 00:41:43.050 slat (nsec): min=5224, max=51496, avg=8201.91, stdev=4234.31 00:41:43.050 clat (usec): min=252, max=396, avg=274.93, stdev=17.78 00:41:43.050 lat (usec): min=257, max=412, avg=283.13, stdev=21.11 00:41:43.050 clat percentiles (usec): 00:41:43.050 | 1.00th=[ 255], 5.00th=[ 258], 10.00th=[ 260], 20.00th=[ 262], 00:41:43.050 | 30.00th=[ 265], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:41:43.050 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 314], 00:41:43.050 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 379], 99.95th=[ 396], 00:41:43.050 | 99.99th=[ 396] 00:41:43.050 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:41:43.050 slat (nsec): min=6513, max=56981, avg=10571.63, stdev=5935.07 00:41:43.050 clat (usec): min=175, max=459, avg=197.49, stdev=18.62 00:41:43.050 lat (usec): min=182, max=497, avg=208.06, stdev=22.37 00:41:43.050 clat percentiles (usec): 00:41:43.050 | 1.00th=[ 180], 5.00th=[ 182], 10.00th=[ 182], 20.00th=[ 186], 00:41:43.050 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 194], 00:41:43.050 | 70.00th=[ 200], 80.00th=[ 210], 90.00th=[ 225], 95.00th=[ 231], 00:41:43.050 | 99.00th=[ 253], 99.50th=[ 269], 99.90th=[ 338], 99.95th=[ 347], 00:41:43.050 | 99.99th=[ 461] 00:41:43.050 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:41:43.050 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:43.050 lat (usec) : 250=50.22%, 500=49.78% 00:41:43.050 cpu : usr=4.40%, sys=3.80%, ctx=4036, majf=0, minf=1 00:41:43.050 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:43.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:43.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:43.050 issued rwts: total=1988,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:43.050 
latency : target=0, window=0, percentile=100.00%, depth=1 00:41:43.050 00:41:43.050 Run status group 0 (all jobs): 00:41:43.050 READ: bw=7944KiB/s (8135kB/s), 7944KiB/s-7944KiB/s (8135kB/s-8135kB/s), io=7952KiB (8143kB), run=1001-1001msec 00:41:43.050 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:41:43.050 00:41:43.050 Disk stats (read/write): 00:41:43.050 nvme0n1: ios=1667/2048, merge=0/0, ticks=471/370, in_queue=841, util=91.78% 00:41:43.050 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:43.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:41:43.050 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:43.050 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:41:43.050 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:43.050 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:43.050 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:43.050 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:43.050 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:41:43.050 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:41:43.050 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:41:43.050 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 
-- # nvmfcleanup 00:41:43.050 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:41:43.050 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:43.050 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:41:43.050 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:43.050 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:43.309 rmmod nvme_tcp 00:41:43.309 rmmod nvme_fabrics 00:41:43.309 rmmod nvme_keyring 00:41:43.309 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:43.309 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:41:43.309 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:41:43.309 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3185375 ']' 00:41:43.309 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3185375 00:41:43.309 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3185375 ']' 00:41:43.309 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3185375 00:41:43.309 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:41:43.309 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:43.309 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3185375 00:41:43.309 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:43.309 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:43.309 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3185375' 00:41:43.309 killing process with pid 3185375 00:41:43.309 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3185375 00:41:43.309 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3185375 00:41:44.684 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:44.684 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:44.684 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:44.684 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:41:44.684 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:41:44.684 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:44.684 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:41:44.684 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:44.684 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:44.684 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:44.684 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:41:44.684 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:46.582 18:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:46.582 00:41:46.582 real 0m11.038s 00:41:46.582 user 0m18.968s 00:41:46.582 sys 0m3.669s 00:41:46.582 18:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:46.582 18:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:46.582 ************************************ 00:41:46.582 END TEST nvmf_nmic 00:41:46.582 ************************************ 00:41:46.582 18:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:46.582 18:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:46.582 18:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:46.582 18:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:46.841 ************************************ 00:41:46.841 START TEST nvmf_fio_target 00:41:46.841 ************************************ 00:41:46.841 18:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:46.841 * Looking for test storage... 
00:41:46.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:46.841 18:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:46.841 18:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:41:46.841 18:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:46.841 
18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:46.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:46.841 --rc genhtml_branch_coverage=1 00:41:46.841 --rc genhtml_function_coverage=1 00:41:46.841 --rc genhtml_legend=1 00:41:46.841 --rc geninfo_all_blocks=1 00:41:46.841 --rc geninfo_unexecuted_blocks=1 00:41:46.841 00:41:46.841 ' 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:46.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:46.841 --rc genhtml_branch_coverage=1 00:41:46.841 --rc genhtml_function_coverage=1 00:41:46.841 --rc genhtml_legend=1 00:41:46.841 --rc geninfo_all_blocks=1 00:41:46.841 --rc geninfo_unexecuted_blocks=1 00:41:46.841 00:41:46.841 ' 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:46.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:46.841 --rc genhtml_branch_coverage=1 00:41:46.841 --rc genhtml_function_coverage=1 00:41:46.841 --rc genhtml_legend=1 00:41:46.841 --rc geninfo_all_blocks=1 00:41:46.841 --rc geninfo_unexecuted_blocks=1 00:41:46.841 00:41:46.841 ' 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:46.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:46.841 --rc genhtml_branch_coverage=1 00:41:46.841 --rc genhtml_function_coverage=1 00:41:46.841 --rc genhtml_legend=1 00:41:46.841 --rc geninfo_all_blocks=1 
00:41:46.841 --rc geninfo_unexecuted_blocks=1 00:41:46.841 00:41:46.841 ' 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:46.841 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:46.842 
18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:46.842 18:48:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:46.842 
18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:46.842 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:41:46.842 18:48:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:41:48.743 18:48:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:48.743 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:48.744 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:48.744 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:48.744 
18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:48.744 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:48.744 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:48.744 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:48.744 18:48:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:48.744 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:48.744 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:48.744 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:48.744 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:48.744 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:48.744 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:48.744 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:48.744 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:48.744 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:48.744 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:48.744 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:48.744 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:48.744 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:48.744 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:41:48.744 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:48.744 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:49.002 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:49.002 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:49.002 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:49.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:49.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:41:49.003 00:41:49.003 --- 10.0.0.2 ping statistics --- 00:41:49.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:49.003 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:49.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:49.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:41:49.003 00:41:49.003 --- 10.0.0.1 ping statistics --- 00:41:49.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:49.003 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:49.003 18:48:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3188208 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3188208 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3188208 ']' 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:49.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:49.003 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:49.003 [2024-11-18 18:48:47.262693] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:49.003 [2024-11-18 18:48:47.265374] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:41:49.003 [2024-11-18 18:48:47.265478] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:49.261 [2024-11-18 18:48:47.412304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:49.261 [2024-11-18 18:48:47.549842] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:49.261 [2024-11-18 18:48:47.549911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:49.261 [2024-11-18 18:48:47.549941] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:49.261 [2024-11-18 18:48:47.549963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:49.261 [2024-11-18 18:48:47.549985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:49.261 [2024-11-18 18:48:47.552799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:49.261 [2024-11-18 18:48:47.552871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:49.261 [2024-11-18 18:48:47.552924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:49.261 [2024-11-18 18:48:47.552936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:49.826 [2024-11-18 18:48:47.920480] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:49.826 [2024-11-18 18:48:47.933946] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:49.826 [2024-11-18 18:48:47.934102] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:41:49.826 [2024-11-18 18:48:47.934929] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:49.826 [2024-11-18 18:48:47.935277] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:50.084 18:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:50.084 18:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:41:50.084 18:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:50.084 18:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:50.084 18:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:50.084 18:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:50.084 18:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:50.342 [2024-11-18 18:48:48.510007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:50.342 18:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:50.600 18:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:41:50.600 18:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:41:51.165 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:41:51.165 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:51.422 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:41:51.422 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:51.680 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:41:51.680 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:41:51.999 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:52.282 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:41:52.282 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:52.848 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:41:52.848 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:53.107 18:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:41:53.107 18:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:41:53.365 18:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:53.624 18:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:53.624 18:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:53.882 18:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:53.882 18:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:41:54.139 18:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:54.398 [2024-11-18 18:48:52.690283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:54.398 18:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:41:54.655 18:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:41:54.913 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:55.171 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:41:55.171 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:41:55.171 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:55.171 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:41:55.171 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:41:55.171 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:41:57.698 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:57.698 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:57.698 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:57.698 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:41:57.698 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:57.698 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:41:57.698 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:57.698 [global] 00:41:57.698 thread=1 00:41:57.698 invalidate=1 00:41:57.698 rw=write 00:41:57.698 time_based=1 00:41:57.698 runtime=1 00:41:57.698 ioengine=libaio 00:41:57.698 direct=1 00:41:57.698 bs=4096 00:41:57.698 iodepth=1 00:41:57.698 norandommap=0 00:41:57.698 numjobs=1 00:41:57.698 00:41:57.698 verify_dump=1 00:41:57.698 verify_backlog=512 00:41:57.698 verify_state_save=0 00:41:57.698 do_verify=1 00:41:57.698 verify=crc32c-intel 00:41:57.698 [job0] 00:41:57.698 filename=/dev/nvme0n1 00:41:57.698 [job1] 00:41:57.698 filename=/dev/nvme0n2 00:41:57.698 [job2] 00:41:57.698 filename=/dev/nvme0n3 00:41:57.698 [job3] 00:41:57.698 filename=/dev/nvme0n4 00:41:57.698 Could not set queue depth (nvme0n1) 00:41:57.698 Could not set queue depth (nvme0n2) 00:41:57.698 Could not set queue depth (nvme0n3) 00:41:57.698 Could not set queue depth (nvme0n4) 00:41:57.698 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:57.698 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:57.698 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:57.698 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:57.698 fio-3.35 00:41:57.698 Starting 4 threads 00:41:58.629 00:41:58.629 job0: (groupid=0, jobs=1): err= 0: pid=3189291: Mon Nov 18 18:48:56 2024 00:41:58.629 read: IOPS=516, BW=2067KiB/s (2117kB/s)(2096KiB/1014msec) 00:41:58.629 slat (nsec): min=4110, max=19847, avg=6293.62, stdev=2851.03 00:41:58.629 clat (usec): min=211, max=42169, avg=1281.85, stdev=6356.43 00:41:58.629 lat (usec): min=216, 
max=42177, avg=1288.14, stdev=6357.70 00:41:58.629 clat percentiles (usec): 00:41:58.629 | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 249], 00:41:58.629 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 265], 00:41:58.629 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 310], 95.00th=[ 379], 00:41:58.629 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:41:58.629 | 99.99th=[42206] 00:41:58.629 write: IOPS=1009, BW=4039KiB/s (4136kB/s)(4096KiB/1014msec); 0 zone resets 00:41:58.629 slat (nsec): min=5366, max=36784, avg=9675.12, stdev=3375.43 00:41:58.629 clat (usec): min=180, max=615, avg=317.53, stdev=58.98 00:41:58.629 lat (usec): min=186, max=626, avg=327.20, stdev=59.91 00:41:58.629 clat percentiles (usec): 00:41:58.629 | 1.00th=[ 188], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 265], 00:41:58.629 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 338], 00:41:58.629 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 379], 95.00th=[ 392], 00:41:58.629 | 99.00th=[ 433], 99.50th=[ 469], 99.90th=[ 537], 99.95th=[ 619], 00:41:58.629 | 99.99th=[ 619] 00:41:58.629 bw ( KiB/s): min= 528, max= 7664, per=20.28%, avg=4096.00, stdev=5045.91, samples=2 00:41:58.629 iops : min= 132, max= 1916, avg=1024.00, stdev=1261.48, samples=2 00:41:58.629 lat (usec) : 250=21.06%, 500=77.71%, 750=0.32% 00:41:58.629 lat (msec) : 2=0.06%, 50=0.84% 00:41:58.630 cpu : usr=0.69%, sys=1.28%, ctx=1548, majf=0, minf=2 00:41:58.630 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:58.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.630 issued rwts: total=524,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.630 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:58.630 job1: (groupid=0, jobs=1): err= 0: pid=3189292: Mon Nov 18 18:48:56 2024 00:41:58.630 read: IOPS=20, BW=83.7KiB/s 
(85.8kB/s)(84.0KiB/1003msec) 00:41:58.630 slat (nsec): min=8554, max=48788, avg=16695.71, stdev=7788.17 00:41:58.630 clat (usec): min=40880, max=41494, avg=41002.53, stdev=118.40 00:41:58.630 lat (usec): min=40922, max=41503, avg=41019.23, stdev=115.40 00:41:58.630 clat percentiles (usec): 00:41:58.630 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:58.630 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:58.630 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:58.630 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:41:58.630 | 99.99th=[41681] 00:41:58.630 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:41:58.630 slat (nsec): min=7617, max=30490, avg=10519.37, stdev=3793.98 00:41:58.630 clat (usec): min=192, max=1699, avg=262.38, stdev=102.99 00:41:58.630 lat (usec): min=200, max=1709, avg=272.90, stdev=103.49 00:41:58.630 clat percentiles (usec): 00:41:58.630 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 217], 00:41:58.630 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 243], 00:41:58.630 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 392], 95.00th=[ 400], 00:41:58.630 | 99.00th=[ 461], 99.50th=[ 1020], 99.90th=[ 1696], 99.95th=[ 1696], 00:41:58.630 | 99.99th=[ 1696] 00:41:58.630 bw ( KiB/s): min= 4096, max= 4096, per=20.28%, avg=4096.00, stdev= 0.00, samples=1 00:41:58.630 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:58.630 lat (usec) : 250=61.91%, 500=33.40%, 1000=0.19% 00:41:58.630 lat (msec) : 2=0.56%, 50=3.94% 00:41:58.630 cpu : usr=0.70%, sys=0.40%, ctx=534, majf=0, minf=1 00:41:58.630 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:58.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.630 issued rwts: total=21,512,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:41:58.630 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:58.630 job2: (groupid=0, jobs=1): err= 0: pid=3189293: Mon Nov 18 18:48:56 2024 00:41:58.630 read: IOPS=1322, BW=5291KiB/s (5418kB/s)(5296KiB/1001msec) 00:41:58.630 slat (nsec): min=5907, max=67221, avg=10957.52, stdev=5865.29 00:41:58.630 clat (usec): min=292, max=41332, avg=389.69, stdev=1127.54 00:41:58.630 lat (usec): min=298, max=41351, avg=400.65, stdev=1127.85 00:41:58.630 clat percentiles (usec): 00:41:58.630 | 1.00th=[ 297], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 314], 00:41:58.630 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 355], 00:41:58.630 | 70.00th=[ 379], 80.00th=[ 396], 90.00th=[ 416], 95.00th=[ 486], 00:41:58.630 | 99.00th=[ 603], 99.50th=[ 627], 99.90th=[ 693], 99.95th=[41157], 00:41:58.630 | 99.99th=[41157] 00:41:58.630 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:41:58.630 slat (nsec): min=7614, max=52472, avg=12017.54, stdev=6062.49 00:41:58.630 clat (usec): min=213, max=501, avg=287.58, stdev=51.03 00:41:58.630 lat (usec): min=222, max=510, avg=299.60, stdev=49.80 00:41:58.630 clat percentiles (usec): 00:41:58.630 | 1.00th=[ 219], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 241], 00:41:58.630 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 297], 00:41:58.630 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 371], 00:41:58.630 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 445], 99.95th=[ 502], 00:41:58.630 | 99.99th=[ 502] 00:41:58.630 bw ( KiB/s): min= 8016, max= 8016, per=39.69%, avg=8016.00, stdev= 0.00, samples=1 00:41:58.630 iops : min= 2004, max= 2004, avg=2004.00, stdev= 0.00, samples=1 00:41:58.630 lat (usec) : 250=15.91%, 500=82.34%, 750=1.71% 00:41:58.630 lat (msec) : 50=0.03% 00:41:58.630 cpu : usr=2.20%, sys=4.70%, ctx=2861, majf=0, minf=1 00:41:58.630 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:58.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.630 issued rwts: total=1324,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.630 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:58.630 job3: (groupid=0, jobs=1): err= 0: pid=3189294: Mon Nov 18 18:48:56 2024 00:41:58.630 read: IOPS=1662, BW=6648KiB/s (6808kB/s)(6648KiB/1000msec) 00:41:58.630 slat (nsec): min=6377, max=40760, avg=7463.38, stdev=2034.70 00:41:58.630 clat (usec): min=265, max=431, avg=297.85, stdev=14.81 00:41:58.630 lat (usec): min=272, max=439, avg=305.31, stdev=15.17 00:41:58.630 clat percentiles (usec): 00:41:58.630 | 1.00th=[ 273], 5.00th=[ 281], 10.00th=[ 281], 20.00th=[ 285], 00:41:58.630 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 302], 00:41:58.630 | 70.00th=[ 306], 80.00th=[ 310], 90.00th=[ 318], 95.00th=[ 322], 00:41:58.630 | 99.00th=[ 347], 99.50th=[ 363], 99.90th=[ 388], 99.95th=[ 433], 00:41:58.630 | 99.99th=[ 433] 00:41:58.630 write: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec); 0 zone resets 00:41:58.630 slat (nsec): min=8103, max=60528, avg=9935.99, stdev=2816.03 00:41:58.630 clat (usec): min=193, max=507, avg=226.13, stdev=33.39 00:41:58.630 lat (usec): min=202, max=517, avg=236.07, stdev=34.30 00:41:58.630 clat percentiles (usec): 00:41:58.630 | 1.00th=[ 198], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 206], 00:41:58.630 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:41:58.630 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 314], 00:41:58.630 | 99.00th=[ 359], 99.50th=[ 379], 99.90th=[ 400], 99.95th=[ 416], 00:41:58.630 | 99.99th=[ 506] 00:41:58.630 bw ( KiB/s): min= 8192, max= 8192, per=40.56%, avg=8192.00, stdev= 0.00, samples=1 00:41:58.630 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:58.630 lat (usec) : 250=49.54%, 500=50.43%, 750=0.03% 00:41:58.630 cpu : usr=2.80%, sys=3.80%, ctx=3711, majf=0, minf=1 
00:41:58.630 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:58.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.630 issued rwts: total=1662,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.630 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:58.630 00:41:58.630 Run status group 0 (all jobs): 00:41:58.630 READ: bw=13.6MiB/s (14.3MB/s), 83.7KiB/s-6648KiB/s (85.8kB/s-6808kB/s), io=13.8MiB (14.5MB), run=1000-1014msec 00:41:58.630 WRITE: bw=19.7MiB/s (20.7MB/s), 2042KiB/s-8192KiB/s (2091kB/s-8389kB/s), io=20.0MiB (21.0MB), run=1000-1014msec 00:41:58.630 00:41:58.630 Disk stats (read/write): 00:41:58.630 nvme0n1: ios=570/1024, merge=0/0, ticks=541/316, in_queue=857, util=87.27% 00:41:58.630 nvme0n2: ios=66/512, merge=0/0, ticks=850/124, in_queue=974, util=89.94% 00:41:58.630 nvme0n3: ios=1081/1417, merge=0/0, ticks=557/397, in_queue=954, util=93.74% 00:41:58.630 nvme0n4: ios=1599/1564, merge=0/0, ticks=985/345, in_queue=1330, util=95.80% 00:41:58.630 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:41:58.630 [global] 00:41:58.630 thread=1 00:41:58.630 invalidate=1 00:41:58.630 rw=randwrite 00:41:58.630 time_based=1 00:41:58.630 runtime=1 00:41:58.630 ioengine=libaio 00:41:58.630 direct=1 00:41:58.630 bs=4096 00:41:58.630 iodepth=1 00:41:58.630 norandommap=0 00:41:58.630 numjobs=1 00:41:58.630 00:41:58.630 verify_dump=1 00:41:58.630 verify_backlog=512 00:41:58.630 verify_state_save=0 00:41:58.630 do_verify=1 00:41:58.630 verify=crc32c-intel 00:41:58.630 [job0] 00:41:58.630 filename=/dev/nvme0n1 00:41:58.630 [job1] 00:41:58.630 filename=/dev/nvme0n2 00:41:58.630 [job2] 00:41:58.630 filename=/dev/nvme0n3 00:41:58.630 [job3] 00:41:58.630 
filename=/dev/nvme0n4 00:41:58.630 Could not set queue depth (nvme0n1) 00:41:58.630 Could not set queue depth (nvme0n2) 00:41:58.630 Could not set queue depth (nvme0n3) 00:41:58.630 Could not set queue depth (nvme0n4) 00:41:58.886 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:58.886 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:58.886 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:58.886 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:58.886 fio-3.35 00:41:58.886 Starting 4 threads 00:42:00.258 00:42:00.258 job0: (groupid=0, jobs=1): err= 0: pid=3189535: Mon Nov 18 18:48:58 2024 00:42:00.258 read: IOPS=21, BW=85.6KiB/s (87.7kB/s)(88.0KiB/1028msec) 00:42:00.258 slat (nsec): min=7203, max=27146, avg=13664.59, stdev=4464.23 00:42:00.258 clat (usec): min=40453, max=41028, avg=40955.53, stdev=115.32 00:42:00.258 lat (usec): min=40460, max=41039, avg=40969.19, stdev=116.45 00:42:00.258 clat percentiles (usec): 00:42:00.258 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:00.258 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:00.258 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:00.258 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:00.258 | 99.99th=[41157] 00:42:00.258 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:42:00.258 slat (nsec): min=7322, max=46058, avg=14370.35, stdev=7159.61 00:42:00.258 clat (usec): min=176, max=408, avg=228.75, stdev=29.46 00:42:00.258 lat (usec): min=183, max=443, avg=243.12, stdev=34.21 00:42:00.258 clat percentiles (usec): 00:42:00.258 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 202], 00:42:00.258 | 30.00th=[ 210], 
40.00th=[ 217], 50.00th=[ 227], 60.00th=[ 235], 00:42:00.258 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 277], 00:42:00.258 | 99.00th=[ 314], 99.50th=[ 334], 99.90th=[ 408], 99.95th=[ 408], 00:42:00.258 | 99.99th=[ 408] 00:42:00.258 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:42:00.258 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:00.258 lat (usec) : 250=72.66%, 500=23.22% 00:42:00.258 lat (msec) : 50=4.12% 00:42:00.258 cpu : usr=0.58%, sys=0.88%, ctx=534, majf=0, minf=1 00:42:00.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:00.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.258 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:00.258 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:00.258 job1: (groupid=0, jobs=1): err= 0: pid=3189547: Mon Nov 18 18:48:58 2024 00:42:00.258 read: IOPS=19, BW=79.9KiB/s (81.8kB/s)(80.0KiB/1001msec) 00:42:00.258 slat (nsec): min=8332, max=13517, avg=12386.35, stdev=1079.06 00:42:00.258 clat (usec): min=40957, max=41397, avg=41003.84, stdev=93.46 00:42:00.258 lat (usec): min=40970, max=41405, avg=41016.23, stdev=92.49 00:42:00.258 clat percentiles (usec): 00:42:00.258 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:00.258 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:00.258 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:00.258 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:00.258 | 99.99th=[41157] 00:42:00.258 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:42:00.258 slat (nsec): min=7265, max=47845, avg=16655.56, stdev=8689.06 00:42:00.258 clat (usec): min=205, max=702, avg=331.06, stdev=59.21 00:42:00.258 lat (usec): 
min=215, max=723, avg=347.72, stdev=57.12 00:42:00.258 clat percentiles (usec): 00:42:00.258 | 1.00th=[ 212], 5.00th=[ 239], 10.00th=[ 265], 20.00th=[ 285], 00:42:00.258 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 338], 00:42:00.258 | 70.00th=[ 355], 80.00th=[ 392], 90.00th=[ 404], 95.00th=[ 420], 00:42:00.258 | 99.00th=[ 490], 99.50th=[ 519], 99.90th=[ 701], 99.95th=[ 701], 00:42:00.258 | 99.99th=[ 701] 00:42:00.258 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:42:00.258 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:00.258 lat (usec) : 250=6.39%, 500=89.10%, 750=0.75% 00:42:00.258 lat (msec) : 50=3.76% 00:42:00.258 cpu : usr=0.50%, sys=1.20%, ctx=532, majf=0, minf=1 00:42:00.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:00.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.258 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:00.258 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:00.258 job2: (groupid=0, jobs=1): err= 0: pid=3189586: Mon Nov 18 18:48:58 2024 00:42:00.258 read: IOPS=123, BW=493KiB/s (505kB/s)(500KiB/1014msec) 00:42:00.258 slat (nsec): min=6227, max=28016, avg=9049.02, stdev=5339.28 00:42:00.258 clat (usec): min=260, max=41335, avg=6817.74, stdev=14968.20 00:42:00.258 lat (usec): min=266, max=41344, avg=6826.79, stdev=14972.27 00:42:00.258 clat percentiles (usec): 00:42:00.258 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 289], 00:42:00.258 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 322], 00:42:00.258 | 70.00th=[ 334], 80.00th=[ 363], 90.00th=[41157], 95.00th=[41157], 00:42:00.259 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:00.259 | 99.99th=[41157] 00:42:00.259 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 
zone resets 00:42:00.259 slat (nsec): min=7742, max=45935, avg=16306.72, stdev=8169.76 00:42:00.259 clat (usec): min=209, max=490, avg=292.39, stdev=51.02 00:42:00.259 lat (usec): min=218, max=509, avg=308.70, stdev=54.67 00:42:00.259 clat percentiles (usec): 00:42:00.259 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 233], 20.00th=[ 247], 00:42:00.259 | 30.00th=[ 260], 40.00th=[ 273], 50.00th=[ 293], 60.00th=[ 302], 00:42:00.259 | 70.00th=[ 314], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 400], 00:42:00.259 | 99.00th=[ 449], 99.50th=[ 469], 99.90th=[ 490], 99.95th=[ 490], 00:42:00.259 | 99.99th=[ 490] 00:42:00.259 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:42:00.259 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:00.259 lat (usec) : 250=18.68%, 500=78.02%, 750=0.16% 00:42:00.259 lat (msec) : 50=3.14% 00:42:00.259 cpu : usr=0.99%, sys=0.79%, ctx=638, majf=0, minf=1 00:42:00.259 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:00.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.259 issued rwts: total=125,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:00.259 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:00.259 job3: (groupid=0, jobs=1): err= 0: pid=3189593: Mon Nov 18 18:48:58 2024 00:42:00.259 read: IOPS=22, BW=90.7KiB/s (92.9kB/s)(92.0KiB/1014msec) 00:42:00.259 slat (nsec): min=8613, max=28517, avg=18938.52, stdev=7299.75 00:42:00.259 clat (usec): min=355, max=41044, avg=36461.34, stdev=12282.48 00:42:00.259 lat (usec): min=372, max=41059, avg=36480.28, stdev=12284.35 00:42:00.259 clat percentiles (usec): 00:42:00.259 | 1.00th=[ 355], 5.00th=[ 412], 10.00th=[18744], 20.00th=[40633], 00:42:00.259 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:00.259 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 
95.00th=[41157], 00:42:00.259 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:00.259 | 99.99th=[41157] 00:42:00.259 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:42:00.259 slat (nsec): min=8062, max=54408, avg=16228.22, stdev=8391.63 00:42:00.259 clat (usec): min=218, max=524, avg=320.46, stdev=46.91 00:42:00.259 lat (usec): min=228, max=534, avg=336.69, stdev=47.94 00:42:00.259 clat percentiles (usec): 00:42:00.259 | 1.00th=[ 229], 5.00th=[ 247], 10.00th=[ 265], 20.00th=[ 285], 00:42:00.259 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 326], 00:42:00.259 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 392], 95.00th=[ 404], 00:42:00.259 | 99.00th=[ 457], 99.50th=[ 490], 99.90th=[ 523], 99.95th=[ 523], 00:42:00.259 | 99.99th=[ 523] 00:42:00.259 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:42:00.259 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:00.259 lat (usec) : 250=6.54%, 500=89.16%, 750=0.37% 00:42:00.259 lat (msec) : 20=0.19%, 50=3.74% 00:42:00.259 cpu : usr=0.59%, sys=1.09%, ctx=536, majf=0, minf=1 00:42:00.259 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:00.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.259 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:00.259 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:00.259 00:42:00.259 Run status group 0 (all jobs): 00:42:00.259 READ: bw=739KiB/s (757kB/s), 79.9KiB/s-493KiB/s (81.8kB/s-505kB/s), io=760KiB (778kB), run=1001-1028msec 00:42:00.259 WRITE: bw=7969KiB/s (8160kB/s), 1992KiB/s-2046KiB/s (2040kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1028msec 00:42:00.259 00:42:00.259 Disk stats (read/write): 00:42:00.259 nvme0n1: ios=59/512, merge=0/0, ticks=972/111, in_queue=1083, util=94.39% 
00:42:00.259 nvme0n2: ios=65/512, merge=0/0, ticks=689/168, in_queue=857, util=87.36% 00:42:00.259 nvme0n3: ios=177/512, merge=0/0, ticks=1469/137, in_queue=1606, util=93.04% 00:42:00.259 nvme0n4: ios=77/512, merge=0/0, ticks=987/165, in_queue=1152, util=96.58% 00:42:00.259 18:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:42:00.259 [global] 00:42:00.259 thread=1 00:42:00.259 invalidate=1 00:42:00.259 rw=write 00:42:00.259 time_based=1 00:42:00.259 runtime=1 00:42:00.259 ioengine=libaio 00:42:00.259 direct=1 00:42:00.259 bs=4096 00:42:00.259 iodepth=128 00:42:00.259 norandommap=0 00:42:00.259 numjobs=1 00:42:00.259 00:42:00.259 verify_dump=1 00:42:00.259 verify_backlog=512 00:42:00.259 verify_state_save=0 00:42:00.259 do_verify=1 00:42:00.259 verify=crc32c-intel 00:42:00.259 [job0] 00:42:00.259 filename=/dev/nvme0n1 00:42:00.259 [job1] 00:42:00.259 filename=/dev/nvme0n2 00:42:00.259 [job2] 00:42:00.259 filename=/dev/nvme0n3 00:42:00.259 [job3] 00:42:00.259 filename=/dev/nvme0n4 00:42:00.259 Could not set queue depth (nvme0n1) 00:42:00.259 Could not set queue depth (nvme0n2) 00:42:00.259 Could not set queue depth (nvme0n3) 00:42:00.259 Could not set queue depth (nvme0n4) 00:42:00.522 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:00.522 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:00.522 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:00.522 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:00.522 fio-3.35 00:42:00.522 Starting 4 threads 00:42:01.896 00:42:01.896 job0: (groupid=0, jobs=1): err= 0: pid=3189866: Mon Nov 18 18:48:59 2024 00:42:01.896 read: 
IOPS=4418, BW=17.3MiB/s (18.1MB/s)(17.4MiB/1011msec) 00:42:01.896 slat (usec): min=2, max=13457, avg=110.39, stdev=919.54 00:42:01.896 clat (usec): min=3903, max=31807, avg=14856.70, stdev=3849.92 00:42:01.896 lat (usec): min=4663, max=33019, avg=14967.09, stdev=3910.43 00:42:01.896 clat percentiles (usec): 00:42:01.896 | 1.00th=[ 9241], 5.00th=[11076], 10.00th=[11863], 20.00th=[12518], 00:42:01.896 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:42:01.896 | 70.00th=[14484], 80.00th=[17695], 90.00th=[21103], 95.00th=[23200], 00:42:01.896 | 99.00th=[27132], 99.50th=[30278], 99.90th=[30802], 99.95th=[30802], 00:42:01.896 | 99.99th=[31851] 00:42:01.896 write: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec); 0 zone resets 00:42:01.896 slat (usec): min=4, max=13087, avg=100.05, stdev=792.05 00:42:01.896 clat (usec): min=3531, max=27187, avg=13396.17, stdev=2889.05 00:42:01.896 lat (usec): min=3541, max=27236, avg=13496.22, stdev=2962.48 00:42:01.896 clat percentiles (usec): 00:42:01.896 | 1.00th=[ 6521], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10945], 00:42:01.896 | 30.00th=[12387], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:42:01.896 | 70.00th=[14484], 80.00th=[15008], 90.00th=[16450], 95.00th=[18744], 00:42:01.896 | 99.00th=[20841], 99.50th=[23200], 99.90th=[26870], 99.95th=[27132], 00:42:01.896 | 99.99th=[27132] 00:42:01.896 bw ( KiB/s): min=18144, max=18757, per=27.41%, avg=18450.50, stdev=433.46, samples=2 00:42:01.896 iops : min= 4536, max= 4689, avg=4612.50, stdev=108.19, samples=2 00:42:01.896 lat (msec) : 4=0.14%, 10=7.48%, 20=85.15%, 50=7.23% 00:42:01.896 cpu : usr=5.84%, sys=10.99%, ctx=260, majf=0, minf=1 00:42:01.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:42:01.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:01.896 issued rwts: total=4467,4608,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:42:01.896 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:01.896 job1: (groupid=0, jobs=1): err= 0: pid=3189867: Mon Nov 18 18:48:59 2024 00:42:01.896 read: IOPS=4256, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1007msec) 00:42:01.896 slat (usec): min=3, max=11073, avg=110.01, stdev=695.90 00:42:01.896 clat (usec): min=3174, max=24160, avg=14399.46, stdev=2437.72 00:42:01.896 lat (usec): min=9037, max=27579, avg=14509.47, stdev=2477.94 00:42:01.896 clat percentiles (usec): 00:42:01.896 | 1.00th=[10028], 5.00th=[11469], 10.00th=[11994], 20.00th=[12518], 00:42:01.896 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13960], 60.00th=[14484], 00:42:01.896 | 70.00th=[15008], 80.00th=[16057], 90.00th=[17695], 95.00th=[19268], 00:42:01.896 | 99.00th=[23725], 99.50th=[23987], 99.90th=[23987], 99.95th=[23987], 00:42:01.896 | 99.99th=[24249] 00:42:01.896 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:42:01.896 slat (usec): min=4, max=6768, avg=104.34, stdev=610.42 00:42:01.896 clat (usec): min=7517, max=22404, avg=14257.81, stdev=1476.80 00:42:01.896 lat (usec): min=7574, max=22446, avg=14362.15, stdev=1562.24 00:42:01.896 clat percentiles (usec): 00:42:01.896 | 1.00th=[10552], 5.00th=[12387], 10.00th=[12911], 20.00th=[13435], 00:42:01.896 | 30.00th=[13829], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:42:01.896 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15139], 95.00th=[16909], 00:42:01.896 | 99.00th=[20055], 99.50th=[20317], 99.90th=[20579], 99.95th=[21365], 00:42:01.896 | 99.99th=[22414] 00:42:01.896 bw ( KiB/s): min=18256, max=18608, per=27.39%, avg=18432.00, stdev=248.90, samples=2 00:42:01.896 iops : min= 4564, max= 4652, avg=4608.00, stdev=62.23, samples=2 00:42:01.896 lat (msec) : 4=0.01%, 10=0.89%, 20=97.44%, 50=1.66% 00:42:01.896 cpu : usr=6.26%, sys=10.34%, ctx=363, majf=0, minf=2 00:42:01.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:42:01.896 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:01.896 issued rwts: total=4286,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.896 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:01.896 job2: (groupid=0, jobs=1): err= 0: pid=3189868: Mon Nov 18 18:48:59 2024 00:42:01.896 read: IOPS=3956, BW=15.5MiB/s (16.2MB/s)(15.6MiB/1008msec) 00:42:01.896 slat (usec): min=3, max=14664, avg=124.08, stdev=1019.52 00:42:01.896 clat (usec): min=2332, max=30604, avg=16374.13, stdev=3720.57 00:42:01.896 lat (usec): min=5317, max=30621, avg=16498.21, stdev=3799.89 00:42:01.896 clat percentiles (usec): 00:42:01.896 | 1.00th=[11338], 5.00th=[12649], 10.00th=[13173], 20.00th=[14091], 00:42:01.896 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15139], 60.00th=[15533], 00:42:01.896 | 70.00th=[16450], 80.00th=[17957], 90.00th=[22676], 95.00th=[25035], 00:42:01.896 | 99.00th=[28181], 99.50th=[29230], 99.90th=[30540], 99.95th=[30540], 00:42:01.896 | 99.99th=[30540] 00:42:01.896 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:42:01.896 slat (usec): min=3, max=13139, avg=112.82, stdev=912.13 00:42:01.896 clat (usec): min=2953, max=30572, avg=15070.26, stdev=3620.42 00:42:01.896 lat (usec): min=2977, max=30583, avg=15183.08, stdev=3688.64 00:42:01.896 clat percentiles (usec): 00:42:01.896 | 1.00th=[ 6980], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[12125], 00:42:01.896 | 30.00th=[14222], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664], 00:42:01.896 | 70.00th=[16057], 80.00th=[16712], 90.00th=[21103], 95.00th=[21627], 00:42:01.896 | 99.00th=[26346], 99.50th=[26608], 99.90th=[28443], 99.95th=[28705], 00:42:01.896 | 99.99th=[30540] 00:42:01.896 bw ( KiB/s): min=16384, max=16384, per=24.34%, avg=16384.00, stdev= 0.00, samples=2 00:42:01.896 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:42:01.896 lat (msec) : 4=0.12%, 10=6.56%, 
20=80.23%, 50=13.09% 00:42:01.896 cpu : usr=6.16%, sys=9.83%, ctx=201, majf=0, minf=1 00:42:01.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:42:01.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:01.897 issued rwts: total=3988,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:01.897 job3: (groupid=0, jobs=1): err= 0: pid=3189869: Mon Nov 18 18:48:59 2024 00:42:01.897 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:42:01.897 slat (usec): min=3, max=8337, avg=132.82, stdev=815.61 00:42:01.897 clat (usec): min=10402, max=30062, avg=17222.55, stdev=2911.39 00:42:01.897 lat (usec): min=10413, max=30068, avg=17355.37, stdev=2941.19 00:42:01.897 clat percentiles (usec): 00:42:01.897 | 1.00th=[11600], 5.00th=[13173], 10.00th=[14222], 20.00th=[14746], 00:42:01.897 | 30.00th=[15270], 40.00th=[15926], 50.00th=[16909], 60.00th=[17433], 00:42:01.897 | 70.00th=[18744], 80.00th=[19792], 90.00th=[21103], 95.00th=[22152], 00:42:01.897 | 99.00th=[25297], 99.50th=[25560], 99.90th=[28705], 99.95th=[30016], 00:42:01.897 | 99.99th=[30016] 00:42:01.897 write: IOPS=3662, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1010msec); 0 zone resets 00:42:01.897 slat (usec): min=3, max=20077, avg=130.83, stdev=794.71 00:42:01.897 clat (usec): min=5715, max=40080, avg=17799.55, stdev=4526.86 00:42:01.897 lat (usec): min=8785, max=40098, avg=17930.38, stdev=4573.04 00:42:01.897 clat percentiles (usec): 00:42:01.897 | 1.00th=[10945], 5.00th=[13042], 10.00th=[15139], 20.00th=[16450], 00:42:01.897 | 30.00th=[16581], 40.00th=[16712], 50.00th=[16909], 60.00th=[17433], 00:42:01.897 | 70.00th=[17695], 80.00th=[17957], 90.00th=[20055], 95.00th=[25035], 00:42:01.897 | 99.00th=[39584], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:42:01.897 | 99.99th=[40109] 00:42:01.897 bw ( KiB/s): 
min=13128, max=15544, per=21.30%, avg=14336.00, stdev=1708.37, samples=2 00:42:01.897 iops : min= 3282, max= 3886, avg=3584.00, stdev=427.09, samples=2 00:42:01.897 lat (msec) : 10=0.19%, 20=85.62%, 50=14.18% 00:42:01.897 cpu : usr=5.95%, sys=8.23%, ctx=340, majf=0, minf=1 00:42:01.897 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:42:01.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:01.897 issued rwts: total=3584,3699,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:01.897 00:42:01.897 Run status group 0 (all jobs): 00:42:01.897 READ: bw=63.1MiB/s (66.1MB/s), 13.9MiB/s-17.3MiB/s (14.5MB/s-18.1MB/s), io=63.8MiB (66.9MB), run=1007-1011msec 00:42:01.897 WRITE: bw=65.7MiB/s (68.9MB/s), 14.3MiB/s-17.9MiB/s (15.0MB/s-18.7MB/s), io=66.4MiB (69.7MB), run=1007-1011msec 00:42:01.897 00:42:01.897 Disk stats (read/write): 00:42:01.897 nvme0n1: ios=3604/4055, merge=0/0, ticks=51602/51949, in_queue=103551, util=98.80% 00:42:01.897 nvme0n2: ios=3633/4053, merge=0/0, ticks=24401/25947, in_queue=50348, util=92.99% 00:42:01.897 nvme0n3: ios=3272/3584, merge=0/0, ticks=50984/50846, in_queue=101830, util=99.38% 00:42:01.897 nvme0n4: ios=3123/3135, merge=0/0, ticks=25634/24691, in_queue=50325, util=91.30% 00:42:01.897 18:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:42:01.897 [global] 00:42:01.897 thread=1 00:42:01.897 invalidate=1 00:42:01.897 rw=randwrite 00:42:01.897 time_based=1 00:42:01.897 runtime=1 00:42:01.897 ioengine=libaio 00:42:01.897 direct=1 00:42:01.897 bs=4096 00:42:01.897 iodepth=128 00:42:01.897 norandommap=0 00:42:01.897 numjobs=1 00:42:01.897 00:42:01.897 verify_dump=1 00:42:01.897 
verify_backlog=512 00:42:01.897 verify_state_save=0 00:42:01.897 do_verify=1 00:42:01.897 verify=crc32c-intel 00:42:01.897 [job0] 00:42:01.897 filename=/dev/nvme0n1 00:42:01.897 [job1] 00:42:01.897 filename=/dev/nvme0n2 00:42:01.897 [job2] 00:42:01.897 filename=/dev/nvme0n3 00:42:01.897 [job3] 00:42:01.897 filename=/dev/nvme0n4 00:42:01.897 Could not set queue depth (nvme0n1) 00:42:01.897 Could not set queue depth (nvme0n2) 00:42:01.897 Could not set queue depth (nvme0n3) 00:42:01.897 Could not set queue depth (nvme0n4) 00:42:01.897 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:01.897 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:01.897 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:01.897 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:01.897 fio-3.35 00:42:01.897 Starting 4 threads 00:42:03.272 00:42:03.272 job0: (groupid=0, jobs=1): err= 0: pid=3190093: Mon Nov 18 18:49:01 2024 00:42:03.272 read: IOPS=1749, BW=6998KiB/s (7166kB/s)(7124KiB/1018msec) 00:42:03.272 slat (usec): min=2, max=16996, avg=208.91, stdev=1352.81 00:42:03.272 clat (usec): min=8628, max=84439, avg=25396.97, stdev=12579.64 00:42:03.272 lat (usec): min=8633, max=84445, avg=25605.87, stdev=12702.88 00:42:03.272 clat percentiles (usec): 00:42:03.272 | 1.00th=[ 8717], 5.00th=[10945], 10.00th=[14484], 20.00th=[16188], 00:42:03.272 | 30.00th=[19530], 40.00th=[20317], 50.00th=[21627], 60.00th=[23725], 00:42:03.272 | 70.00th=[25297], 80.00th=[29754], 90.00th=[42730], 95.00th=[50594], 00:42:03.272 | 99.00th=[76022], 99.50th=[81265], 99.90th=[84411], 99.95th=[84411], 00:42:03.272 | 99.99th=[84411] 00:42:03.272 write: IOPS=2009, BW=8039KiB/s (8232kB/s)(8192KiB/1019msec); 0 zone resets 00:42:03.272 slat (usec): min=3, max=32981, 
avg=295.62, stdev=1818.47 00:42:03.272 clat (usec): min=5725, max=99139, avg=39212.31, stdev=20432.76 00:42:03.272 lat (usec): min=5729, max=99159, avg=39507.93, stdev=20603.68 00:42:03.272 clat percentiles (usec): 00:42:03.272 | 1.00th=[11600], 5.00th=[13566], 10.00th=[18220], 20.00th=[21627], 00:42:03.272 | 30.00th=[22676], 40.00th=[27132], 50.00th=[33162], 60.00th=[38536], 00:42:03.272 | 70.00th=[49021], 80.00th=[66323], 90.00th=[69731], 95.00th=[73925], 00:42:03.272 | 99.00th=[80217], 99.50th=[80217], 99.90th=[85459], 99.95th=[87557], 00:42:03.272 | 99.99th=[99091] 00:42:03.272 bw ( KiB/s): min= 8192, max= 8192, per=18.62%, avg=8192.00, stdev= 0.00, samples=2 00:42:03.272 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:42:03.272 lat (msec) : 10=1.36%, 20=22.80%, 50=58.27%, 100=17.58% 00:42:03.272 cpu : usr=1.57%, sys=1.57%, ctx=159, majf=0, minf=1 00:42:03.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:42:03.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:03.272 issued rwts: total=1781,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.272 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:03.272 job1: (groupid=0, jobs=1): err= 0: pid=3190094: Mon Nov 18 18:49:01 2024 00:42:03.272 read: IOPS=1507, BW=6029KiB/s (6174kB/s)(6144KiB/1019msec) 00:42:03.272 slat (usec): min=3, max=29048, avg=274.49, stdev=1762.59 00:42:03.272 clat (usec): min=6390, max=75718, avg=37099.89, stdev=20132.65 00:42:03.272 lat (usec): min=6396, max=75725, avg=37374.38, stdev=20178.91 00:42:03.272 clat percentiles (usec): 00:42:03.272 | 1.00th=[ 7767], 5.00th=[10552], 10.00th=[12518], 20.00th=[14746], 00:42:03.272 | 30.00th=[27132], 40.00th=[32113], 50.00th=[34866], 60.00th=[36963], 00:42:03.273 | 70.00th=[39060], 80.00th=[57934], 90.00th=[71828], 95.00th=[74974], 00:42:03.273 | 99.00th=[76022], 
99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:42:03.273 | 99.99th=[76022] 00:42:03.273 write: IOPS=1952, BW=7812KiB/s (7999kB/s)(7960KiB/1019msec); 0 zone resets 00:42:03.273 slat (usec): min=3, max=33738, avg=292.03, stdev=2029.77 00:42:03.273 clat (usec): min=1000, max=106057, avg=33307.58, stdev=20158.40 00:42:03.273 lat (msec): min=5, max=106, avg=33.60, stdev=20.27 00:42:03.273 clat percentiles (msec): 00:42:03.273 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 15], 20.00th=[ 16], 00:42:03.273 | 30.00th=[ 19], 40.00th=[ 30], 50.00th=[ 34], 60.00th=[ 34], 00:42:03.273 | 70.00th=[ 35], 80.00th=[ 44], 90.00th=[ 65], 95.00th=[ 72], 00:42:03.273 | 99.00th=[ 107], 99.50th=[ 107], 99.90th=[ 107], 99.95th=[ 107], 00:42:03.273 | 99.99th=[ 107] 00:42:03.273 bw ( KiB/s): min= 4096, max=10800, per=16.93%, avg=7448.00, stdev=4740.44, samples=2 00:42:03.273 iops : min= 1024, max= 2700, avg=1862.00, stdev=1185.11, samples=2 00:42:03.273 lat (msec) : 2=0.03%, 10=3.23%, 20=26.49%, 50=47.90%, 100=21.47% 00:42:03.273 lat (msec) : 250=0.88% 00:42:03.273 cpu : usr=1.57%, sys=2.16%, ctx=153, majf=0, minf=1 00:42:03.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:42:03.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:03.273 issued rwts: total=1536,1990,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.273 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:03.273 job2: (groupid=0, jobs=1): err= 0: pid=3190097: Mon Nov 18 18:49:01 2024 00:42:03.273 read: IOPS=3139, BW=12.3MiB/s (12.9MB/s)(12.5MiB/1019msec) 00:42:03.273 slat (usec): min=2, max=45200, avg=177.87, stdev=1528.55 00:42:03.273 clat (usec): min=2837, max=96387, avg=23138.99, stdev=16131.48 00:42:03.273 lat (usec): min=2842, max=96406, avg=23316.86, stdev=16248.12 00:42:03.273 clat percentiles (usec): 00:42:03.273 | 1.00th=[ 6915], 5.00th=[ 8979], 10.00th=[11863], 
20.00th=[13173], 00:42:03.273 | 30.00th=[14091], 40.00th=[14746], 50.00th=[15664], 60.00th=[19792], 00:42:03.273 | 70.00th=[25822], 80.00th=[29230], 90.00th=[45876], 95.00th=[54264], 00:42:03.273 | 99.00th=[86508], 99.50th=[88605], 99.90th=[89654], 99.95th=[89654], 00:42:03.273 | 99.99th=[95945] 00:42:03.273 write: IOPS=3517, BW=13.7MiB/s (14.4MB/s)(14.0MiB/1019msec); 0 zone resets 00:42:03.273 slat (usec): min=3, max=17339, avg=117.29, stdev=972.88 00:42:03.273 clat (usec): min=510, max=41397, avg=15406.70, stdev=5891.25 00:42:03.273 lat (usec): min=921, max=41409, avg=15523.99, stdev=5974.80 00:42:03.273 clat percentiles (usec): 00:42:03.273 | 1.00th=[ 3785], 5.00th=[ 7635], 10.00th=[ 8717], 20.00th=[11338], 00:42:03.273 | 30.00th=[12387], 40.00th=[13304], 50.00th=[14091], 60.00th=[15401], 00:42:03.273 | 70.00th=[15795], 80.00th=[19792], 90.00th=[25297], 95.00th=[27132], 00:42:03.273 | 99.00th=[30016], 99.50th=[38011], 99.90th=[39060], 99.95th=[41157], 00:42:03.273 | 99.99th=[41157] 00:42:03.273 bw ( KiB/s): min=12288, max=16376, per=32.58%, avg=14332.00, stdev=2890.65, samples=2 00:42:03.273 iops : min= 3072, max= 4094, avg=3583.00, stdev=722.66, samples=2 00:42:03.273 lat (usec) : 750=0.01%, 1000=0.03% 00:42:03.273 lat (msec) : 4=0.91%, 10=7.84%, 20=62.35%, 50=26.45%, 100=2.40% 00:42:03.273 cpu : usr=1.28%, sys=3.63%, ctx=225, majf=0, minf=1 00:42:03.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:42:03.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:03.273 issued rwts: total=3199,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.273 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:03.273 job3: (groupid=0, jobs=1): err= 0: pid=3190098: Mon Nov 18 18:49:01 2024 00:42:03.273 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.2MiB/1019msec) 00:42:03.273 slat (usec): min=2, max=44387, avg=163.49, 
stdev=1273.08 00:42:03.273 clat (usec): min=6818, max=57759, avg=19880.58, stdev=10081.86 00:42:03.273 lat (usec): min=6821, max=57761, avg=20044.07, stdev=10134.95 00:42:03.273 clat percentiles (usec): 00:42:03.273 | 1.00th=[ 6980], 5.00th=[ 9896], 10.00th=[11600], 20.00th=[14222], 00:42:03.273 | 30.00th=[14484], 40.00th=[14746], 50.00th=[16188], 60.00th=[18744], 00:42:03.273 | 70.00th=[19792], 80.00th=[26346], 90.00th=[29754], 95.00th=[42730], 00:42:03.273 | 99.00th=[55837], 99.50th=[56886], 99.90th=[57934], 99.95th=[57934], 00:42:03.273 | 99.99th=[57934] 00:42:03.273 write: IOPS=3517, BW=13.7MiB/s (14.4MB/s)(14.0MiB/1019msec); 0 zone resets 00:42:03.273 slat (usec): min=2, max=16282, avg=131.41, stdev=782.59 00:42:03.273 clat (usec): min=496, max=77068, avg=18769.63, stdev=12666.08 00:42:03.273 lat (usec): min=511, max=77076, avg=18901.05, stdev=12728.99 00:42:03.273 clat percentiles (usec): 00:42:03.273 | 1.00th=[ 1975], 5.00th=[ 4490], 10.00th=[ 8291], 20.00th=[13173], 00:42:03.273 | 30.00th=[14222], 40.00th=[14746], 50.00th=[15795], 60.00th=[16712], 00:42:03.273 | 70.00th=[17695], 80.00th=[21890], 90.00th=[25822], 95.00th=[54264], 00:42:03.273 | 99.00th=[70779], 99.50th=[72877], 99.90th=[77071], 99.95th=[77071], 00:42:03.273 | 99.99th=[77071] 00:42:03.273 bw ( KiB/s): min=11608, max=16384, per=31.82%, avg=13996.00, stdev=3377.14, samples=2 00:42:03.273 iops : min= 2902, max= 4096, avg=3499.00, stdev=844.29, samples=2 00:42:03.273 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.01% 00:42:03.273 lat (msec) : 2=0.51%, 4=1.97%, 10=5.91%, 20=66.61%, 50=20.08% 00:42:03.273 lat (msec) : 100=4.84% 00:42:03.273 cpu : usr=1.87%, sys=3.34%, ctx=329, majf=0, minf=1 00:42:03.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:42:03.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:03.273 issued rwts: total=3115,3584,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:42:03.273 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:03.273 00:42:03.273 Run status group 0 (all jobs): 00:42:03.273 READ: bw=36.9MiB/s (38.7MB/s), 6029KiB/s-12.3MiB/s (6174kB/s-12.9MB/s), io=37.6MiB (39.4MB), run=1018-1019msec 00:42:03.273 WRITE: bw=43.0MiB/s (45.0MB/s), 7812KiB/s-13.7MiB/s (7999kB/s-14.4MB/s), io=43.8MiB (45.9MB), run=1019-1019msec 00:42:03.273 00:42:03.273 Disk stats (read/write): 00:42:03.273 nvme0n1: ios=1559/1814, merge=0/0, ticks=22289/34678, in_queue=56967, util=97.90% 00:42:03.273 nvme0n2: ios=1489/1536, merge=0/0, ticks=17393/18703, in_queue=36096, util=100.00% 00:42:03.273 nvme0n3: ios=2610/2836, merge=0/0, ticks=39935/35996, in_queue=75931, util=94.99% 00:42:03.273 nvme0n4: ios=2965/3072, merge=0/0, ticks=23176/28029, in_queue=51205, util=97.05% 00:42:03.273 18:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:42:03.273 18:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3190240 00:42:03.273 18:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:42:03.273 18:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:42:03.273 [global] 00:42:03.273 thread=1 00:42:03.273 invalidate=1 00:42:03.273 rw=read 00:42:03.273 time_based=1 00:42:03.273 runtime=10 00:42:03.273 ioengine=libaio 00:42:03.273 direct=1 00:42:03.273 bs=4096 00:42:03.273 iodepth=1 00:42:03.273 norandommap=1 00:42:03.273 numjobs=1 00:42:03.273 00:42:03.273 [job0] 00:42:03.273 filename=/dev/nvme0n1 00:42:03.273 [job1] 00:42:03.273 filename=/dev/nvme0n2 00:42:03.273 [job2] 00:42:03.273 filename=/dev/nvme0n3 00:42:03.273 [job3] 00:42:03.273 filename=/dev/nvme0n4 00:42:03.273 Could not set queue depth (nvme0n1) 00:42:03.273 Could not set queue depth (nvme0n2) 
00:42:03.273 Could not set queue depth (nvme0n3) 00:42:03.273 Could not set queue depth (nvme0n4) 00:42:03.273 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:03.273 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:03.273 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:03.273 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:03.273 fio-3.35 00:42:03.273 Starting 4 threads 00:42:06.555 18:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:42:06.555 18:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:42:06.555 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=29696000, buflen=4096 00:42:06.555 fio: pid=3190332, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:06.812 18:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:06.812 18:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:42:06.812 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=22147072, buflen=4096 00:42:06.812 fio: pid=3190331, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:07.070 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=962560, buflen=4096 00:42:07.070 fio: pid=3190329, err=95/file:io_u.c:1889, func=io_u error, error=Operation 
not supported 00:42:07.070 18:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:07.070 18:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:42:07.329 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=17596416, buflen=4096 00:42:07.329 fio: pid=3190330, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:42:07.329 18:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:07.329 18:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:42:07.329 00:42:07.329 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3190329: Mon Nov 18 18:49:05 2024 00:42:07.329 read: IOPS=67, BW=269KiB/s (275kB/s)(940KiB/3500msec) 00:42:07.329 slat (usec): min=6, max=11441, avg=127.57, stdev=1038.98 00:42:07.329 clat (usec): min=255, max=41504, avg=14663.92, stdev=19439.43 00:42:07.329 lat (usec): min=263, max=41516, avg=14791.97, stdev=19385.25 00:42:07.329 clat percentiles (usec): 00:42:07.329 | 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 265], 20.00th=[ 273], 00:42:07.329 | 30.00th=[ 289], 40.00th=[ 306], 50.00th=[ 367], 60.00th=[ 515], 00:42:07.329 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:07.329 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:42:07.329 | 99.99th=[41681] 00:42:07.329 bw ( KiB/s): min= 96, max= 1128, per=1.64%, avg=292.00, stdev=410.45, samples=6 00:42:07.329 iops : min= 24, max= 282, avg=73.00, stdev=102.61, samples=6 00:42:07.329 lat (usec) : 
500=58.47%, 750=5.93% 00:42:07.329 lat (msec) : 50=35.17% 00:42:07.329 cpu : usr=0.06%, sys=0.11%, ctx=240, majf=0, minf=2 00:42:07.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:07.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:07.329 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:07.329 issued rwts: total=236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:07.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:07.329 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3190330: Mon Nov 18 18:49:05 2024 00:42:07.329 read: IOPS=1111, BW=4444KiB/s (4550kB/s)(16.8MiB/3867msec) 00:42:07.329 slat (usec): min=5, max=8893, avg=11.61, stdev=172.14 00:42:07.329 clat (usec): min=231, max=84706, avg=886.32, stdev=4926.50 00:42:07.329 lat (usec): min=240, max=84721, avg=896.32, stdev=4946.76 00:42:07.329 clat percentiles (usec): 00:42:07.329 | 1.00th=[ 243], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 258], 00:42:07.329 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 285], 60.00th=[ 326], 00:42:07.329 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 392], 95.00th=[ 404], 00:42:07.329 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[44827], 00:42:07.329 | 99.99th=[84411] 00:42:07.329 bw ( KiB/s): min= 92, max=12256, per=27.56%, avg=4900.00, stdev=6009.95, samples=7 00:42:07.329 iops : min= 23, max= 3064, avg=1225.00, stdev=1502.49, samples=7 00:42:07.329 lat (usec) : 250=9.40%, 500=88.71%, 750=0.47% 00:42:07.329 lat (msec) : 20=0.02%, 50=1.33%, 100=0.05% 00:42:07.329 cpu : usr=0.62%, sys=1.47%, ctx=4302, majf=0, minf=1 00:42:07.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:07.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:07.329 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:07.329 issued rwts: total=4297,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:42:07.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:07.329 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3190331: Mon Nov 18 18:49:05 2024 00:42:07.329 read: IOPS=1674, BW=6696KiB/s (6857kB/s)(21.1MiB/3230msec) 00:42:07.329 slat (nsec): min=5780, max=65998, avg=9686.19, stdev=4916.82 00:42:07.329 clat (usec): min=220, max=41348, avg=580.95, stdev=3096.75 00:42:07.329 lat (usec): min=226, max=41354, avg=590.64, stdev=3097.45 00:42:07.329 clat percentiles (usec): 00:42:07.329 | 1.00th=[ 273], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 302], 00:42:07.329 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 330], 60.00th=[ 347], 00:42:07.329 | 70.00th=[ 359], 80.00th=[ 375], 90.00th=[ 404], 95.00th=[ 445], 00:42:07.329 | 99.00th=[ 586], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:07.329 | 99.99th=[41157] 00:42:07.329 bw ( KiB/s): min= 936, max=12096, per=40.51%, avg=7202.67, stdev=4460.51, samples=6 00:42:07.329 iops : min= 234, max= 3024, avg=1800.67, stdev=1115.13, samples=6 00:42:07.329 lat (usec) : 250=0.54%, 500=96.38%, 750=2.37%, 1000=0.07% 00:42:07.329 lat (msec) : 2=0.04%, 50=0.59% 00:42:07.329 cpu : usr=0.81%, sys=2.76%, ctx=5408, majf=0, minf=1 00:42:07.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:07.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:07.329 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:07.329 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:07.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:07.329 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3190332: Mon Nov 18 18:49:05 2024 00:42:07.329 read: IOPS=2458, BW=9834KiB/s (10.1MB/s)(28.3MiB/2949msec) 00:42:07.329 slat (nsec): min=5865, max=48976, avg=9534.14, stdev=4646.67 
00:42:07.329 clat (usec): min=262, max=45973, avg=390.87, stdev=1964.79 00:42:07.329 lat (usec): min=269, max=45992, avg=400.40, stdev=1965.25 00:42:07.329 clat percentiles (usec): 00:42:07.329 | 1.00th=[ 269], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 281], 00:42:07.329 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 302], 00:42:07.329 | 70.00th=[ 306], 80.00th=[ 314], 90.00th=[ 322], 95.00th=[ 330], 00:42:07.329 | 99.00th=[ 383], 99.50th=[ 461], 99.90th=[42206], 99.95th=[42206], 00:42:07.329 | 99.99th=[45876] 00:42:07.329 bw ( KiB/s): min= 96, max=13448, per=55.57%, avg=9880.00, stdev=5614.70, samples=5 00:42:07.329 iops : min= 24, max= 3362, avg=2470.00, stdev=1403.68, samples=5 00:42:07.329 lat (usec) : 500=99.59%, 750=0.07%, 1000=0.07% 00:42:07.329 lat (msec) : 2=0.04%, 50=0.22% 00:42:07.329 cpu : usr=1.32%, sys=3.73%, ctx=7252, majf=0, minf=2 00:42:07.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:07.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:07.329 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:07.329 issued rwts: total=7251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:07.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:07.329 00:42:07.329 Run status group 0 (all jobs): 00:42:07.329 READ: bw=17.4MiB/s (18.2MB/s), 269KiB/s-9834KiB/s (275kB/s-10.1MB/s), io=67.1MiB (70.4MB), run=2949-3867msec 00:42:07.329 00:42:07.329 Disk stats (read/write): 00:42:07.329 nvme0n1: ios=230/0, merge=0/0, ticks=3324/0, in_queue=3324, util=95.34% 00:42:07.329 nvme0n2: ios=4337/0, merge=0/0, ticks=4455/0, in_queue=4455, util=99.72% 00:42:07.329 nvme0n3: ios=5404/0, merge=0/0, ticks=3002/0, in_queue=3002, util=96.82% 00:42:07.329 nvme0n4: ios=7226/0, merge=0/0, ticks=3735/0, in_queue=3735, util=99.97% 00:42:07.588 18:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:42:07.588 18:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:42:08.153 18:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:08.153 18:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:42:08.412 18:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:08.412 18:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:42:08.670 18:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:08.670 18:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:42:08.928 18:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:42:08.928 18:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3190240 00:42:08.928 18:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:42:08.928 18:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:09.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:09.863 18:49:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:09.863 18:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:42:09.863 18:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:09.863 18:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:09.863 18:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:09.863 18:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:09.863 18:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:42:09.863 18:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:42:09.863 18:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:42:09.863 nvmf hotplug test: fio failed as expected 00:42:09.863 18:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:10.121 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:42:10.121 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:42:10.121 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:42:10.121 18:49:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:42:10.121 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:42:10.121 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:10.121 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:42:10.121 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:10.121 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:42:10.121 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:10.121 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:10.121 rmmod nvme_tcp 00:42:10.121 rmmod nvme_fabrics 00:42:10.121 rmmod nvme_keyring 00:42:10.121 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:10.121 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:42:10.121 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:42:10.121 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3188208 ']' 00:42:10.121 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3188208 00:42:10.121 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3188208 ']' 00:42:10.121 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3188208 00:42:10.121 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- common/autotest_common.sh@959 -- # uname 00:42:10.121 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:10.122 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3188208 00:42:10.122 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:10.122 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:10.122 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3188208' 00:42:10.122 killing process with pid 3188208 00:42:10.122 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3188208 00:42:10.122 18:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3188208 00:42:11.496 18:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:11.496 18:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:11.496 18:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:11.496 18:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:42:11.496 18:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:42:11.496 18:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:11.496 18:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:42:11.496 18:49:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:11.496 18:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:11.496 18:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:11.496 18:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:11.496 18:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:13.398 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:13.398 00:42:13.398 real 0m26.694s 00:42:13.398 user 1m12.873s 00:42:13.398 sys 0m10.225s 00:42:13.398 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:13.398 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:13.398 ************************************ 00:42:13.398 END TEST nvmf_fio_target 00:42:13.398 ************************************ 00:42:13.398 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:13.398 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:13.398 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:13.398 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:13.398 ************************************ 00:42:13.398 START TEST nvmf_bdevio 00:42:13.398 
************************************ 00:42:13.398 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:13.398 * Looking for test storage... 00:42:13.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:13.398 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:13.398 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:42:13.398 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:13.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:13.657 --rc genhtml_branch_coverage=1 00:42:13.657 --rc genhtml_function_coverage=1 00:42:13.657 --rc genhtml_legend=1 00:42:13.657 --rc geninfo_all_blocks=1 00:42:13.657 --rc geninfo_unexecuted_blocks=1 00:42:13.657 00:42:13.657 ' 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:13.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:13.657 --rc genhtml_branch_coverage=1 00:42:13.657 --rc genhtml_function_coverage=1 00:42:13.657 --rc genhtml_legend=1 00:42:13.657 --rc geninfo_all_blocks=1 00:42:13.657 --rc geninfo_unexecuted_blocks=1 00:42:13.657 00:42:13.657 ' 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:13.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:13.657 --rc genhtml_branch_coverage=1 00:42:13.657 --rc genhtml_function_coverage=1 00:42:13.657 --rc genhtml_legend=1 00:42:13.657 --rc geninfo_all_blocks=1 00:42:13.657 --rc geninfo_unexecuted_blocks=1 00:42:13.657 00:42:13.657 ' 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:13.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:42:13.657 --rc genhtml_branch_coverage=1 00:42:13.657 --rc genhtml_function_coverage=1 00:42:13.657 --rc genhtml_legend=1 00:42:13.657 --rc geninfo_all_blocks=1 00:42:13.657 --rc geninfo_unexecuted_blocks=1 00:42:13.657 00:42:13.657 ' 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:42:13.657 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:13.658 18:49:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.658 18:49:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:42:13.658 18:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:42:15.560 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:15.560 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:42:15.560 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:15.560 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:15.560 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:15.561 18:49:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:15.561 18:49:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:15.561 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:15.561 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:15.561 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:15.561 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:15.561 
18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:15.561 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:15.562 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:15.562 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:15.562 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:15.562 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:15.562 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:15.562 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:15.562 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:15.562 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:15.562 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:15.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:15.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:42:15.820 00:42:15.820 --- 10.0.0.2 ping statistics --- 00:42:15.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:15.820 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:15.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:15.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:42:15.820 00:42:15.820 --- 10.0.0.1 ping statistics --- 00:42:15.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:15.820 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:15.820 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=3193213 00:42:15.821 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:42:15.821 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3193213 00:42:15.821 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3193213 ']' 00:42:15.821 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:15.821 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:15.821 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:15.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:15.821 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:15.821 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:15.821 [2024-11-18 18:49:14.077490] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:15.821 [2024-11-18 18:49:14.079938] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:42:15.821 [2024-11-18 18:49:14.080051] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:16.079 [2024-11-18 18:49:14.232131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:16.079 [2024-11-18 18:49:14.365774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:16.079 [2024-11-18 18:49:14.365842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:16.079 [2024-11-18 18:49:14.365881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:16.079 [2024-11-18 18:49:14.365904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:16.079 [2024-11-18 18:49:14.365937] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:16.079 [2024-11-18 18:49:14.369000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:16.079 [2024-11-18 18:49:14.369090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:42:16.079 [2024-11-18 18:49:14.369172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:16.079 [2024-11-18 18:49:14.369210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:42:16.646 [2024-11-18 18:49:14.734517] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:16.646 [2024-11-18 18:49:14.743964] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:16.646 [2024-11-18 18:49:14.744163] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:42:16.646 [2024-11-18 18:49:14.745037] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:16.646 [2024-11-18 18:49:14.745405] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:16.904 [2024-11-18 18:49:15.110381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:16.904 Malloc0 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:16.904 [2024-11-18 18:49:15.234641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:42:16.904 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:42:17.163 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:17.163 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:17.163 { 00:42:17.163 "params": { 00:42:17.163 "name": "Nvme$subsystem", 00:42:17.163 "trtype": "$TEST_TRANSPORT", 00:42:17.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:17.163 "adrfam": "ipv4", 00:42:17.163 "trsvcid": "$NVMF_PORT", 00:42:17.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:17.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:17.163 "hdgst": ${hdgst:-false}, 00:42:17.163 "ddgst": ${ddgst:-false} 00:42:17.164 }, 00:42:17.164 "method": "bdev_nvme_attach_controller" 00:42:17.164 } 00:42:17.164 EOF 00:42:17.164 )") 00:42:17.164 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:42:17.164 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:42:17.164 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:42:17.164 18:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:17.164 "params": { 00:42:17.164 "name": "Nvme1", 00:42:17.164 "trtype": "tcp", 00:42:17.164 "traddr": "10.0.0.2", 00:42:17.164 "adrfam": "ipv4", 00:42:17.164 "trsvcid": "4420", 00:42:17.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:17.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:17.164 "hdgst": false, 00:42:17.164 "ddgst": false 00:42:17.164 }, 00:42:17.164 "method": "bdev_nvme_attach_controller" 00:42:17.164 }' 00:42:17.164 [2024-11-18 18:49:15.318541] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:42:17.164 [2024-11-18 18:49:15.318704] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193365 ] 00:42:17.164 [2024-11-18 18:49:15.457814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:17.421 [2024-11-18 18:49:15.593432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:17.421 [2024-11-18 18:49:15.593482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:17.421 [2024-11-18 18:49:15.593489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:17.987 I/O targets: 00:42:17.987 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:42:17.987 00:42:17.987 00:42:17.987 CUnit - A unit testing framework for C - Version 2.1-3 00:42:17.987 http://cunit.sourceforge.net/ 00:42:17.987 00:42:17.987 00:42:17.987 Suite: bdevio tests on: Nvme1n1 00:42:17.987 Test: blockdev write read block ...passed 00:42:17.987 Test: blockdev write zeroes read block ...passed 00:42:17.987 Test: blockdev write zeroes read no split ...passed 00:42:17.987 Test: blockdev 
write zeroes read split ...passed 00:42:17.987 Test: blockdev write zeroes read split partial ...passed 00:42:17.987 Test: blockdev reset ...[2024-11-18 18:49:16.216205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:42:17.987 [2024-11-18 18:49:16.216394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:42:18.245 [2024-11-18 18:49:16.351186] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:42:18.245 passed 00:42:18.245 Test: blockdev write read 8 blocks ...passed 00:42:18.245 Test: blockdev write read size > 128k ...passed 00:42:18.245 Test: blockdev write read invalid size ...passed 00:42:18.245 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:18.245 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:18.245 Test: blockdev write read max offset ...passed 00:42:18.245 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:18.245 Test: blockdev writev readv 8 blocks ...passed 00:42:18.245 Test: blockdev writev readv 30 x 1block ...passed 00:42:18.245 Test: blockdev writev readv block ...passed 00:42:18.245 Test: blockdev writev readv size > 128k ...passed 00:42:18.245 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:18.245 Test: blockdev comparev and writev ...[2024-11-18 18:49:16.565630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:18.245 [2024-11-18 18:49:16.565682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:18.245 [2024-11-18 18:49:16.565731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:42:18.245 [2024-11-18 18:49:16.565759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:18.245 [2024-11-18 18:49:16.566323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:18.246 [2024-11-18 18:49:16.566357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:42:18.246 [2024-11-18 18:49:16.566399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:18.246 [2024-11-18 18:49:16.566425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:42:18.246 [2024-11-18 18:49:16.566953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:18.246 [2024-11-18 18:49:16.566986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:42:18.246 [2024-11-18 18:49:16.567023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:18.246 [2024-11-18 18:49:16.567049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:42:18.246 [2024-11-18 18:49:16.567604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:18.246 [2024-11-18 18:49:16.567645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:42:18.246 [2024-11-18 18:49:16.567684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:18.246 [2024-11-18 18:49:16.567709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:42:18.504 passed 00:42:18.504 Test: blockdev nvme passthru rw ...passed 00:42:18.504 Test: blockdev nvme passthru vendor specific ...[2024-11-18 18:49:16.649953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:18.504 [2024-11-18 18:49:16.649995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:42:18.504 [2024-11-18 18:49:16.650227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:18.504 [2024-11-18 18:49:16.650261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:42:18.504 [2024-11-18 18:49:16.650472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:18.504 [2024-11-18 18:49:16.650505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:42:18.504 [2024-11-18 18:49:16.650725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:18.504 [2024-11-18 18:49:16.650758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:42:18.504 passed 00:42:18.504 Test: blockdev nvme admin passthru ...passed 00:42:18.504 Test: blockdev copy ...passed 00:42:18.504 00:42:18.504 Run Summary: Type Total Ran Passed Failed Inactive 00:42:18.504 suites 1 1 n/a 0 0 00:42:18.504 tests 23 23 23 0 0 00:42:18.504 asserts 152 152 152 0 n/a 00:42:18.504 00:42:18.504 Elapsed time = 
1.340 seconds 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:19.439 rmmod nvme_tcp 00:42:19.439 rmmod nvme_fabrics 00:42:19.439 rmmod nvme_keyring 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:42:19.439 18:49:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3193213 ']' 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3193213 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3193213 ']' 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3193213 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3193213 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3193213' 00:42:19.439 killing process with pid 3193213 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3193213 00:42:19.439 18:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3193213 00:42:20.813 18:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:20.814 18:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:20.814 18:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:20.814 18:49:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:42:20.814 18:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:42:20.814 18:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:42:20.814 18:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:20.814 18:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:20.814 18:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:20.814 18:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:20.814 18:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:20.814 18:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:22.745 18:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:22.745 00:42:22.745 real 0m9.305s 00:42:22.745 user 0m16.959s 00:42:22.745 sys 0m3.150s 00:42:22.745 18:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:22.745 18:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:22.745 ************************************ 00:42:22.745 END TEST nvmf_bdevio 00:42:22.745 ************************************ 00:42:22.745 18:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:42:22.745 00:42:22.745 real 4m28.679s 00:42:22.745 user 9m48.137s 00:42:22.745 sys 1m29.449s 00:42:22.745 18:49:20 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:22.745 18:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:22.745 ************************************ 00:42:22.745 END TEST nvmf_target_core_interrupt_mode 00:42:22.745 ************************************ 00:42:22.745 18:49:21 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:22.745 18:49:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:22.745 18:49:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:22.745 18:49:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:22.745 ************************************ 00:42:22.745 START TEST nvmf_interrupt 00:42:22.745 ************************************ 00:42:22.745 18:49:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:23.004 * Looking for test storage... 
00:42:23.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:23.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.004 --rc genhtml_branch_coverage=1 00:42:23.004 --rc genhtml_function_coverage=1 00:42:23.004 --rc genhtml_legend=1 00:42:23.004 --rc geninfo_all_blocks=1 00:42:23.004 --rc geninfo_unexecuted_blocks=1 00:42:23.004 00:42:23.004 ' 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:23.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.004 --rc genhtml_branch_coverage=1 00:42:23.004 --rc 
genhtml_function_coverage=1 00:42:23.004 --rc genhtml_legend=1 00:42:23.004 --rc geninfo_all_blocks=1 00:42:23.004 --rc geninfo_unexecuted_blocks=1 00:42:23.004 00:42:23.004 ' 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:23.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.004 --rc genhtml_branch_coverage=1 00:42:23.004 --rc genhtml_function_coverage=1 00:42:23.004 --rc genhtml_legend=1 00:42:23.004 --rc geninfo_all_blocks=1 00:42:23.004 --rc geninfo_unexecuted_blocks=1 00:42:23.004 00:42:23.004 ' 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:23.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.004 --rc genhtml_branch_coverage=1 00:42:23.004 --rc genhtml_function_coverage=1 00:42:23.004 --rc genhtml_legend=1 00:42:23.004 --rc geninfo_all_blocks=1 00:42:23.004 --rc geninfo_unexecuted_blocks=1 00:42:23.004 00:42:23.004 ' 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:23.004 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:23.005 
18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.005 
18:49:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:23.005 18:49:21 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:23.005 
18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:42:23.005 18:49:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:24.906 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:24.906 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:42:24.906 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:24.906 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:24.906 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:24.906 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:25.166 18:49:23 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:25.166 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:25.166 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:25.166 18:49:23 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:25.166 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:25.166 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:25.166 18:49:23 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:25.166 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:25.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:25.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:42:25.167 00:42:25.167 --- 10.0.0.2 ping statistics --- 00:42:25.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:25.167 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:42:25.167 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:25.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:25.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:42:25.167 00:42:25.167 --- 10.0.0.1 ping statistics --- 00:42:25.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:25.167 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:42:25.167 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:25.167 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:42:25.167 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:25.167 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:25.167 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:25.167 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:25.167 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:25.167 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:25.167 18:49:23 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:25.425 18:49:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:42:25.425 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:25.425 18:49:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:25.425 18:49:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:25.425 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3195725 00:42:25.425 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:42:25.425 18:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3195725 00:42:25.425 18:49:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3195725 ']' 00:42:25.425 18:49:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:25.425 18:49:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:25.425 18:49:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:25.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:25.425 18:49:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:25.425 18:49:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:25.425 [2024-11-18 18:49:23.614040] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:25.425 [2024-11-18 18:49:23.616757] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:42:25.425 [2024-11-18 18:49:23.616861] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:25.425 [2024-11-18 18:49:23.761071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:25.684 [2024-11-18 18:49:23.892211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:25.684 [2024-11-18 18:49:23.892291] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:25.684 [2024-11-18 18:49:23.892327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:25.684 [2024-11-18 18:49:23.892349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:25.684 [2024-11-18 18:49:23.892379] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:25.684 [2024-11-18 18:49:23.894915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:25.684 [2024-11-18 18:49:23.894932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:25.943 [2024-11-18 18:49:24.261379] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:25.943 [2024-11-18 18:49:24.262130] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:25.943 [2024-11-18 18:49:24.262484] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:42:26.510 18:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:26.510 18:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:42:26.510 18:49:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:26.510 18:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:26.510 18:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:26.510 18:49:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:26.510 18:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:42:26.510 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:42:26.511 5000+0 records in 00:42:26.511 5000+0 records out 00:42:26.511 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0102118 s, 1.0 GB/s 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:26.511 AIO0 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.511 18:49:24 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:26.511 [2024-11-18 18:49:24.644004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:26.511 [2024-11-18 18:49:24.672300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3195725 0 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3195725 0 idle 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3195725 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3195725 -w 256 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3195725 root 20 0 20.1t 196224 100992 S 0.0 0.3 0:00.74 reactor_0' 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3195725 root 20 0 20.1t 196224 100992 S 0.0 0.3 0:00.74 reactor_0 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:26.511 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:26.511 
18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:26.769 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:26.769 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:26.769 18:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:26.769 18:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3195725 1 00:42:26.769 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3195725 1 idle 00:42:26.769 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3195725 00:42:26.769 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:26.769 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:26.769 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:26.769 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:26.769 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:26.769 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:26.769 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:26.769 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:26.769 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:26.769 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3195725 -w 256 00:42:26.769 18:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:26.769 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3195729 root 20 0 20.1t 196224 100992 S 0.0 0.3 0:00.00 reactor_1' 00:42:26.769 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3195729 root 20 0 20.1t 
196224 100992 S 0.0 0.3 0:00.00 reactor_1 00:42:26.769 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:26.769 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:26.769 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:26.769 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:26.769 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:26.769 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:26.769 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:26.769 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:26.769 18:49:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:42:26.769 18:49:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3195893 00:42:26.769 18:49:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:26.769 18:49:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:26.769 18:49:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:26.770 18:49:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3195725 0 00:42:26.770 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3195725 0 busy 00:42:26.770 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3195725 00:42:26.770 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:26.770 18:49:25 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:42:26.770 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:26.770 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:26.770 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:26.770 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:26.770 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:26.770 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:26.770 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3195725 -w 256 00:42:26.770 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:27.028 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3195725 root 20 0 20.1t 197376 101760 S 6.7 0.3 0:00.75 reactor_0' 00:42:27.028 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3195725 root 20 0 20.1t 197376 101760 S 6.7 0.3 0:00.75 reactor_0 00:42:27.028 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:27.028 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:27.028 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:42:27.028 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:42:27.028 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:27.028 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:27.028 18:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:42:27.962 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:42:27.962 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:27.962 18:49:26 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@26 -- # top -bHn 1 -p 3195725 -w 256 00:42:27.962 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3195725 root 20 0 20.1t 210048 102144 R 99.9 0.3 0:03.06 reactor_0' 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3195725 root 20 0 20.1t 210048 102144 R 99.9 0.3 0:03.06 reactor_0 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3195725 1 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3195725 1 busy 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3195725 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local 
busy_threshold=30 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3195725 -w 256 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3195729 root 20 0 20.1t 210048 102144 R 93.3 0.3 0:01.32 reactor_1' 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3195729 root 20 0 20.1t 210048 102144 R 93.3 0.3 0:01.32 reactor_1 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:28.220 18:49:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3195893 00:42:38.187 Initializing NVMe Controllers 00:42:38.187 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:38.187 
Controller IO queue size 256, less than required. 00:42:38.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:42:38.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:42:38.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:42:38.187 Initialization complete. Launching workers. 00:42:38.187 ======================================================== 00:42:38.187 Latency(us) 00:42:38.187 Device Information : IOPS MiB/s Average min max 00:42:38.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 10647.58 41.59 24064.82 6437.86 63312.40 00:42:38.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 10644.38 41.58 24069.19 7197.53 28510.37 00:42:38.187 ======================================================== 00:42:38.187 Total : 21291.96 83.17 24067.00 6437.86 63312.40 00:42:38.187 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3195725 0 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3195725 0 idle 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3195725 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:38.187 18:49:35 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3195725 -w 256 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3195725 root 20 0 20.1t 210048 102144 S 0.0 0.3 0:20.70 reactor_0' 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3195725 root 20 0 20.1t 210048 102144 S 0.0 0.3 0:20.70 reactor_0 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:38.187 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3195725 1 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3195725 1 idle 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3195725 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3195725 -w 256 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3195729 root 20 0 20.1t 210048 102144 S 0.0 0.3 0:09.98 reactor_1' 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3195729 root 20 0 20.1t 210048 102144 S 0.0 0.3 0:09.98 reactor_1 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:42:38.188 18:49:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:42:40.089 18:49:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:42:40.089 18:49:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:42:40.089 18:49:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3195725 0 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3195725 0 idle 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3195725 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:40.089 18:49:38 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3195725 -w 256 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3195725 root 20 0 20.1t 237696 111744 S 0.0 0.4 0:20.87 reactor_0' 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3195725 root 20 0 20.1t 237696 111744 S 0.0 0.4 0:20.87 reactor_0 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # 
return 0 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3195725 1 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3195725 1 idle 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3195725 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3195725 -w 256 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3195729 root 20 0 20.1t 237696 111744 S 0.0 0.4 0:10.06 reactor_1' 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3195729 root 20 0 20.1t 237696 111744 S 0.0 0.4 0:10.06 reactor_1 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:40.089 18:49:38 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:40.089 18:49:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:40.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:40.348 18:49:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:40.348 18:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:42:40.348 18:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:40.348 18:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:40.348 18:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:40.348 18:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:40.348 18:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:42:40.348 18:49:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:42:40.348 18:49:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:42:40.348 18:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:40.348 18:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:42:40.348 18:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:40.348 18:49:38 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:42:40.348 18:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:40.348 18:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:40.348 rmmod nvme_tcp 00:42:40.606 rmmod nvme_fabrics 00:42:40.606 rmmod nvme_keyring 00:42:40.606 18:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:40.606 18:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:42:40.606 18:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:42:40.606 18:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 3195725 ']' 00:42:40.606 18:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3195725 00:42:40.606 18:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3195725 ']' 00:42:40.606 18:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3195725 00:42:40.606 18:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:42:40.606 18:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:40.606 18:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3195725 00:42:40.606 18:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:40.607 18:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:40.607 18:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3195725' 00:42:40.607 killing process with pid 3195725 00:42:40.607 18:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3195725 00:42:40.607 18:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3195725 00:42:41.981 18:49:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:41.981 18:49:39 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:41.981 18:49:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:41.981 18:49:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:42:41.981 18:49:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:42:41.981 18:49:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:41.981 18:49:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:42:41.981 18:49:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:41.981 18:49:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:41.981 18:49:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:41.981 18:49:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:41.981 18:49:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:43.882 18:49:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:43.882 00:42:43.882 real 0m20.955s 00:42:43.882 user 0m39.403s 00:42:43.882 sys 0m6.412s 00:42:43.882 18:49:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:43.882 18:49:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:43.882 ************************************ 00:42:43.882 END TEST nvmf_interrupt 00:42:43.882 ************************************ 00:42:43.882 00:42:43.882 real 35m36.379s 00:42:43.882 user 93m26.575s 00:42:43.882 sys 7m48.629s 00:42:43.882 18:49:42 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:43.882 18:49:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:43.882 ************************************ 00:42:43.882 END TEST nvmf_tcp 00:42:43.882 ************************************ 00:42:43.882 18:49:42 -- 
spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:42:43.882 18:49:42 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:43.882 18:49:42 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:43.882 18:49:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:43.882 18:49:42 -- common/autotest_common.sh@10 -- # set +x 00:42:43.882 ************************************ 00:42:43.882 START TEST spdkcli_nvmf_tcp 00:42:43.882 ************************************ 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:43.882 * Looking for test storage... 00:42:43.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:42:43.882 
18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:43.882 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:44.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:44.140 --rc genhtml_branch_coverage=1 00:42:44.140 --rc genhtml_function_coverage=1 00:42:44.140 
--rc genhtml_legend=1 00:42:44.140 --rc geninfo_all_blocks=1 00:42:44.140 --rc geninfo_unexecuted_blocks=1 00:42:44.140 00:42:44.140 ' 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:44.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:44.140 --rc genhtml_branch_coverage=1 00:42:44.140 --rc genhtml_function_coverage=1 00:42:44.140 --rc genhtml_legend=1 00:42:44.140 --rc geninfo_all_blocks=1 00:42:44.140 --rc geninfo_unexecuted_blocks=1 00:42:44.140 00:42:44.140 ' 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:44.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:44.140 --rc genhtml_branch_coverage=1 00:42:44.140 --rc genhtml_function_coverage=1 00:42:44.140 --rc genhtml_legend=1 00:42:44.140 --rc geninfo_all_blocks=1 00:42:44.140 --rc geninfo_unexecuted_blocks=1 00:42:44.140 00:42:44.140 ' 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:44.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:44.140 --rc genhtml_branch_coverage=1 00:42:44.140 --rc genhtml_function_coverage=1 00:42:44.140 --rc genhtml_legend=1 00:42:44.140 --rc geninfo_all_blocks=1 00:42:44.140 --rc geninfo_unexecuted_blocks=1 00:42:44.140 00:42:44.140 ' 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # uname -s 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:44.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 
00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3198028 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3198028 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3198028 ']' 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:44.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:44.140 18:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:44.140 [2024-11-18 18:49:42.346868] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:42:44.140 [2024-11-18 18:49:42.347069] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3198028 ] 00:42:44.397 [2024-11-18 18:49:42.499625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:44.397 [2024-11-18 18:49:42.639022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:44.397 [2024-11-18 18:49:42.639024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:44.967 18:49:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:44.967 18:49:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:42:44.967 18:49:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:42:44.967 18:49:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:44.967 18:49:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:45.226 18:49:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:42:45.226 18:49:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:42:45.226 18:49:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:42:45.226 18:49:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:45.226 18:49:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:45.226 18:49:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:42:45.226 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:42:45.226 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:42:45.226 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:42:45.226 '\''/bdevs/malloc create 32 
512 Malloc5'\'' '\''Malloc5'\'' True 00:42:45.226 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:42:45.226 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:42:45.226 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:45.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:42:45.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:42:45.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:45.226 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:45.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:42:45.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:45.226 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:45.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:42:45.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:45.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:45.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:45.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:45.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:42:45.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:42:45.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:45.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:42:45.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:45.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:42:45.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:42:45.226 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:42:45.226 ' 00:42:48.510 [2024-11-18 18:49:46.167893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:49.443 [2024-11-18 18:49:47.453965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:42:51.971 [2024-11-18 18:49:49.837601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:42:53.870 [2024-11-18 18:49:51.884333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:42:55.246 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:42:55.246 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:42:55.246 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:42:55.246 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:42:55.246 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:42:55.246 Executing command: 
['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:42:55.246 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:42:55.246 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:55.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:42:55.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:42:55.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:55.246 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:55.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:42:55.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:55.246 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:55.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:42:55.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:55.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:55.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:55.246 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:55.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:42:55.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:42:55.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:55.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:42:55.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:55.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:42:55.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:42:55.246 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:42:55.246 18:49:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:42:55.246 18:49:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:55.246 18:49:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:55.247 18:49:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:42:55.247 18:49:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:55.247 18:49:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:55.247 18:49:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:42:55.247 18:49:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:42:55.813 18:49:54 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:42:55.813 18:49:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:42:55.813 18:49:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:42:55.813 18:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:55.813 18:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:55.813 18:49:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:42:55.813 18:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:55.813 18:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:55.813 18:49:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:42:55.813 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:42:55.813 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:55.813 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:42:55.813 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:42:55.813 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:42:55.813 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:42:55.813 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:55.813 '\''/bdevs/malloc delete 
Malloc6'\'' '\''Malloc6'\'' 00:42:55.813 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:42:55.813 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:42:55.813 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:42:55.813 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:42:55.813 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:42:55.813 ' 00:43:02.441 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:43:02.441 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:43:02.441 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:02.441 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:43:02.441 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:43:02.441 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:43:02.441 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:43:02.441 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:02.441 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:43:02.441 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:43:02.441 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:43:02.441 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:43:02.441 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:43:02.441 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:43:02.441 18:49:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit 
spdkcli_clear_nvmf_config 00:43:02.441 18:49:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:02.441 18:49:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:02.441 18:50:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3198028 00:43:02.441 18:50:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3198028 ']' 00:43:02.441 18:50:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3198028 00:43:02.441 18:50:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:43:02.441 18:50:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:02.441 18:50:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3198028 00:43:02.441 18:50:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:02.441 18:50:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:02.441 18:50:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3198028' 00:43:02.441 killing process with pid 3198028 00:43:02.442 18:50:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3198028 00:43:02.442 18:50:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3198028 00:43:03.009 18:50:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:43:03.009 18:50:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:43:03.009 18:50:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3198028 ']' 00:43:03.009 18:50:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3198028 00:43:03.009 18:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3198028 ']' 00:43:03.009 18:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3198028 00:43:03.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3198028) - No such process 00:43:03.009 18:50:01 
spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3198028 is not found' 00:43:03.009 Process with pid 3198028 is not found 00:43:03.009 18:50:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:43:03.009 18:50:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:43:03.009 18:50:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:43:03.009 00:43:03.009 real 0m19.033s 00:43:03.009 user 0m39.946s 00:43:03.009 sys 0m1.062s 00:43:03.009 18:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:03.009 18:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:03.009 ************************************ 00:43:03.009 END TEST spdkcli_nvmf_tcp 00:43:03.009 ************************************ 00:43:03.009 18:50:01 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:03.009 18:50:01 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:03.009 18:50:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:03.009 18:50:01 -- common/autotest_common.sh@10 -- # set +x 00:43:03.009 ************************************ 00:43:03.009 START TEST nvmf_identify_passthru 00:43:03.009 ************************************ 00:43:03.009 18:50:01 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:03.009 * Looking for test storage... 
00:43:03.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:03.009 18:50:01 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:03.009 18:50:01 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:43:03.009 18:50:01 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:03.009 18:50:01 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:03.009 18:50:01 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:03.009 18:50:01 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:03.009 18:50:01 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:43:03.010 18:50:01 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:03.010 18:50:01 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:03.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:03.010 --rc genhtml_branch_coverage=1 00:43:03.010 --rc genhtml_function_coverage=1 00:43:03.010 --rc genhtml_legend=1 00:43:03.010 --rc geninfo_all_blocks=1 00:43:03.010 --rc geninfo_unexecuted_blocks=1 00:43:03.010 00:43:03.010 ' 00:43:03.010 18:50:01 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:03.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:03.010 --rc genhtml_branch_coverage=1 00:43:03.010 --rc genhtml_function_coverage=1 
00:43:03.010 --rc genhtml_legend=1 00:43:03.010 --rc geninfo_all_blocks=1 00:43:03.010 --rc geninfo_unexecuted_blocks=1 00:43:03.010 00:43:03.010 ' 00:43:03.010 18:50:01 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:03.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:03.010 --rc genhtml_branch_coverage=1 00:43:03.010 --rc genhtml_function_coverage=1 00:43:03.010 --rc genhtml_legend=1 00:43:03.010 --rc geninfo_all_blocks=1 00:43:03.010 --rc geninfo_unexecuted_blocks=1 00:43:03.010 00:43:03.010 ' 00:43:03.010 18:50:01 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:03.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:03.010 --rc genhtml_branch_coverage=1 00:43:03.010 --rc genhtml_function_coverage=1 00:43:03.010 --rc genhtml_legend=1 00:43:03.010 --rc geninfo_all_blocks=1 00:43:03.010 --rc geninfo_unexecuted_blocks=1 00:43:03.010 00:43:03.010 ' 00:43:03.010 18:50:01 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:03.010 18:50:01 nvmf_identify_passthru -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:03.010 18:50:01 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.010 18:50:01 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.010 18:50:01 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.010 18:50:01 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:03.010 18:50:01 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:03.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:03.010 18:50:01 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:03.010 18:50:01 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:03.010 18:50:01 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.010 18:50:01 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.010 18:50:01 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.010 18:50:01 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:03.010 18:50:01 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.010 18:50:01 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:03.010 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:03.011 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@476 -- 
# prepare_net_devs 00:43:03.011 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:03.011 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:03.011 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:03.011 18:50:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:03.011 18:50:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:03.011 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:03.011 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:03.011 18:50:01 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:43:03.011 18:50:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:43:05.541 18:50:03 
nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:05.541 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:05.542 
18:50:03 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:43:05.542 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:43:05.542 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:43:05.542 Found net devices under 0000:0a:00.0: cvl_0_0 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:43:05.542 Found net devices under 0000:0a:00.1: cvl_0_1 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:05.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:05.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:43:05.542 00:43:05.542 --- 10.0.0.2 ping statistics --- 00:43:05.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:05.542 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:05.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:05.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:43:05.542 00:43:05.542 --- 10.0.0.1 ping statistics --- 00:43:05.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:05.542 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:05.542 18:50:03 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:05.542 18:50:03 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:43:05.542 18:50:03 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:05.542 18:50:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:05.542 18:50:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:43:05.542 18:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:43:05.542 18:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:43:05.542 18:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:43:05.542 18:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:43:05.542 18:50:03 nvmf_identify_passthru -- 
common/autotest_common.sh@1498 -- # bdfs=() 00:43:05.542 18:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:43:05.542 18:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:43:05.542 18:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:43:05.542 18:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:43:05.542 18:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:43:05.542 18:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:43:05.542 18:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:43:05.542 18:50:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:43:05.542 18:50:03 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:43:05.542 18:50:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:43:05.542 18:50:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:43:05.542 18:50:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:43:09.727 18:50:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:43:09.727 18:50:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:43:09.727 18:50:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:43:09.727 18:50:07 nvmf_identify_passthru -- 
target/identify_passthru.sh@24 -- # awk '{print $3}' 00:43:14.993 18:50:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:43:14.993 18:50:12 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:43:14.993 18:50:12 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:14.993 18:50:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:14.993 18:50:12 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:43:14.993 18:50:12 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:14.993 18:50:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:14.993 18:50:12 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3203046 00:43:14.994 18:50:12 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:43:14.994 18:50:12 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:14.994 18:50:12 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3203046 00:43:14.994 18:50:12 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3203046 ']' 00:43:14.994 18:50:12 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:14.994 18:50:12 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:14.994 18:50:12 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:14.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:43:14.994 18:50:12 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:14.994 18:50:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:14.994 [2024-11-18 18:50:12.473576] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:43:14.994 [2024-11-18 18:50:12.473739] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:14.994 [2024-11-18 18:50:12.613337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:14.994 [2024-11-18 18:50:12.736897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:14.994 [2024-11-18 18:50:12.736987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:14.994 [2024-11-18 18:50:12.737009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:14.994 [2024-11-18 18:50:12.737030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:14.994 [2024-11-18 18:50:12.737046] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:14.994 [2024-11-18 18:50:12.739671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:14.994 [2024-11-18 18:50:12.739731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:14.994 [2024-11-18 18:50:12.739777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:14.994 [2024-11-18 18:50:12.739784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:15.252 18:50:13 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:15.252 18:50:13 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:43:15.252 18:50:13 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:43:15.252 18:50:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.252 18:50:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:15.252 INFO: Log level set to 20 00:43:15.252 INFO: Requests: 00:43:15.252 { 00:43:15.252 "jsonrpc": "2.0", 00:43:15.252 "method": "nvmf_set_config", 00:43:15.252 "id": 1, 00:43:15.252 "params": { 00:43:15.252 "admin_cmd_passthru": { 00:43:15.252 "identify_ctrlr": true 00:43:15.252 } 00:43:15.252 } 00:43:15.252 } 00:43:15.252 00:43:15.252 INFO: response: 00:43:15.252 { 00:43:15.252 "jsonrpc": "2.0", 00:43:15.252 "id": 1, 00:43:15.252 "result": true 00:43:15.252 } 00:43:15.252 00:43:15.252 18:50:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.252 18:50:13 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:43:15.252 18:50:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.252 18:50:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:15.252 INFO: Setting log level to 20 00:43:15.252 INFO: Setting log level to 20 00:43:15.252 INFO: Log level set to 20 00:43:15.252 INFO: Log level set to 20 00:43:15.252 
INFO: Requests: 00:43:15.252 { 00:43:15.252 "jsonrpc": "2.0", 00:43:15.252 "method": "framework_start_init", 00:43:15.252 "id": 1 00:43:15.252 } 00:43:15.252 00:43:15.252 INFO: Requests: 00:43:15.252 { 00:43:15.252 "jsonrpc": "2.0", 00:43:15.252 "method": "framework_start_init", 00:43:15.252 "id": 1 00:43:15.252 } 00:43:15.252 00:43:15.511 [2024-11-18 18:50:13.818186] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:43:15.511 INFO: response: 00:43:15.511 { 00:43:15.511 "jsonrpc": "2.0", 00:43:15.511 "id": 1, 00:43:15.511 "result": true 00:43:15.511 } 00:43:15.511 00:43:15.511 INFO: response: 00:43:15.511 { 00:43:15.511 "jsonrpc": "2.0", 00:43:15.511 "id": 1, 00:43:15.511 "result": true 00:43:15.511 } 00:43:15.511 00:43:15.511 18:50:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.511 18:50:13 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:15.511 18:50:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.511 18:50:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:15.511 INFO: Setting log level to 40 00:43:15.511 INFO: Setting log level to 40 00:43:15.511 INFO: Setting log level to 40 00:43:15.511 [2024-11-18 18:50:13.831109] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:15.769 18:50:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.769 18:50:13 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:43:15.769 18:50:13 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:15.769 18:50:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:15.769 18:50:13 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:43:15.769 18:50:13 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.769 18:50:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:19.046 Nvme0n1 00:43:19.046 18:50:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:19.046 18:50:16 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:43:19.046 18:50:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:19.046 18:50:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:19.046 18:50:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:19.046 18:50:16 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:43:19.046 18:50:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:19.046 18:50:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:19.046 18:50:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:19.046 18:50:16 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:19.046 18:50:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:19.046 18:50:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:19.046 [2024-11-18 18:50:16.797133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:19.046 18:50:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:19.046 18:50:16 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:43:19.046 18:50:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:19.046 18:50:16 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:19.046 [ 00:43:19.046 { 00:43:19.046 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:43:19.046 "subtype": "Discovery", 00:43:19.046 "listen_addresses": [], 00:43:19.046 "allow_any_host": true, 00:43:19.046 "hosts": [] 00:43:19.046 }, 00:43:19.046 { 00:43:19.046 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:43:19.046 "subtype": "NVMe", 00:43:19.046 "listen_addresses": [ 00:43:19.046 { 00:43:19.046 "trtype": "TCP", 00:43:19.046 "adrfam": "IPv4", 00:43:19.046 "traddr": "10.0.0.2", 00:43:19.046 "trsvcid": "4420" 00:43:19.046 } 00:43:19.046 ], 00:43:19.046 "allow_any_host": true, 00:43:19.046 "hosts": [], 00:43:19.046 "serial_number": "SPDK00000000000001", 00:43:19.046 "model_number": "SPDK bdev Controller", 00:43:19.046 "max_namespaces": 1, 00:43:19.046 "min_cntlid": 1, 00:43:19.046 "max_cntlid": 65519, 00:43:19.046 "namespaces": [ 00:43:19.046 { 00:43:19.046 "nsid": 1, 00:43:19.046 "bdev_name": "Nvme0n1", 00:43:19.046 "name": "Nvme0n1", 00:43:19.046 "nguid": "93D7B879F4CE43C9B12D323B30DD5EB9", 00:43:19.046 "uuid": "93d7b879-f4ce-43c9-b12d-323b30dd5eb9" 00:43:19.046 } 00:43:19.046 ] 00:43:19.046 } 00:43:19.046 ] 00:43:19.046 18:50:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:19.046 18:50:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:19.046 18:50:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:43:19.046 18:50:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:43:19.046 18:50:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:43:19.046 18:50:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:19.046 18:50:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:43:19.046 18:50:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:43:19.304 18:50:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:43:19.304 18:50:17 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:43:19.304 18:50:17 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:43:19.304 18:50:17 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:19.304 18:50:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:19.304 18:50:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:19.304 18:50:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:19.304 18:50:17 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:43:19.304 18:50:17 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:43:19.304 18:50:17 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:19.304 18:50:17 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:43:19.304 18:50:17 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:19.304 18:50:17 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:43:19.304 18:50:17 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:19.305 18:50:17 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:19.305 rmmod nvme_tcp 00:43:19.305 rmmod nvme_fabrics 00:43:19.305 rmmod nvme_keyring 00:43:19.305 18:50:17 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:19.305 18:50:17 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:43:19.305 18:50:17 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:43:19.305 18:50:17 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 3203046 ']' 00:43:19.305 18:50:17 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3203046 00:43:19.305 18:50:17 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3203046 ']' 00:43:19.305 18:50:17 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3203046 00:43:19.305 18:50:17 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:43:19.305 18:50:17 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:19.305 18:50:17 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3203046 00:43:19.305 18:50:17 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:19.305 18:50:17 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:19.305 18:50:17 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3203046' 00:43:19.305 killing process with pid 3203046 00:43:19.305 18:50:17 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3203046 00:43:19.305 18:50:17 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3203046 00:43:21.830 18:50:19 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:21.830 18:50:19 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:21.830 18:50:19 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:21.830 18:50:19 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:43:21.830 18:50:19 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:43:21.830 18:50:19 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:21.830 18:50:19 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:43:21.830 18:50:19 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:21.830 18:50:19 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:21.830 18:50:19 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:21.830 18:50:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:21.830 18:50:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:23.730 18:50:21 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:23.730 00:43:23.730 real 0m20.841s 00:43:23.730 user 0m33.902s 00:43:23.730 sys 0m3.598s 00:43:23.730 18:50:21 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:23.730 18:50:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:23.730 ************************************ 00:43:23.730 END TEST nvmf_identify_passthru 00:43:23.730 ************************************ 00:43:23.730 18:50:22 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:23.730 18:50:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:23.730 18:50:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:23.730 18:50:22 -- common/autotest_common.sh@10 -- # set +x 00:43:23.730 ************************************ 00:43:23.730 START TEST nvmf_dif 00:43:23.730 ************************************ 00:43:23.730 18:50:22 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:23.989 * Looking for test storage... 
00:43:23.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:23.989 18:50:22 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:23.989 18:50:22 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:43:23.989 18:50:22 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:23.989 18:50:22 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:43:23.989 18:50:22 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:23.989 18:50:22 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:23.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:23.989 --rc genhtml_branch_coverage=1 00:43:23.989 --rc genhtml_function_coverage=1 00:43:23.989 --rc genhtml_legend=1 00:43:23.989 --rc geninfo_all_blocks=1 00:43:23.989 --rc geninfo_unexecuted_blocks=1 00:43:23.989 00:43:23.989 ' 00:43:23.989 18:50:22 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:23.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:23.989 --rc genhtml_branch_coverage=1 00:43:23.989 --rc genhtml_function_coverage=1 00:43:23.989 --rc genhtml_legend=1 00:43:23.989 --rc geninfo_all_blocks=1 00:43:23.989 --rc geninfo_unexecuted_blocks=1 00:43:23.989 00:43:23.989 ' 00:43:23.989 18:50:22 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:43:23.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:23.989 --rc genhtml_branch_coverage=1 00:43:23.989 --rc genhtml_function_coverage=1 00:43:23.989 --rc genhtml_legend=1 00:43:23.989 --rc geninfo_all_blocks=1 00:43:23.989 --rc geninfo_unexecuted_blocks=1 00:43:23.989 00:43:23.989 ' 00:43:23.989 18:50:22 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:23.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:23.989 --rc genhtml_branch_coverage=1 00:43:23.989 --rc genhtml_function_coverage=1 00:43:23.989 --rc genhtml_legend=1 00:43:23.989 --rc geninfo_all_blocks=1 00:43:23.989 --rc geninfo_unexecuted_blocks=1 00:43:23.989 00:43:23.989 ' 00:43:23.989 18:50:22 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:23.989 18:50:22 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:43:23.989 18:50:22 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:23.989 18:50:22 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:23.989 18:50:22 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:23.989 18:50:22 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:23.989 18:50:22 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:23.989 18:50:22 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:23.989 18:50:22 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:23.989 18:50:22 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:23.989 18:50:22 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:23.989 18:50:22 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:23.989 18:50:22 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:23.989 18:50:22 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:23.989 18:50:22 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:23.989 18:50:22 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:23.989 18:50:22 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:23.989 18:50:22 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:23.989 18:50:22 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:23.989 18:50:22 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:23.989 18:50:22 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:23.989 18:50:22 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:23.990 18:50:22 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:23.990 18:50:22 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:43:23.990 18:50:22 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:23.990 18:50:22 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:43:23.990 18:50:22 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:23.990 18:50:22 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:23.990 18:50:22 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:23.990 18:50:22 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:23.990 18:50:22 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:23.990 18:50:22 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:23.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:23.990 18:50:22 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:23.990 18:50:22 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:23.990 18:50:22 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:23.990 18:50:22 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:43:23.990 18:50:22 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:43:23.990 18:50:22 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:43:23.990 18:50:22 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:43:23.990 18:50:22 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:43:23.990 18:50:22 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:23.990 18:50:22 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:23.990 18:50:22 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:23.990 18:50:22 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:23.990 18:50:22 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:23.990 18:50:22 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:23.990 18:50:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:23.990 18:50:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:23.990 18:50:22 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:23.990 18:50:22 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:23.990 18:50:22 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:43:23.990 18:50:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:43:25.975 18:50:24 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:43:25.975 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:25.975 18:50:24 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:43:25.976 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:25.976 18:50:24 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:43:25.976 Found net devices under 0000:0a:00.0: cvl_0_0 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:43:25.976 Found net devices under 0000:0a:00.1: cvl_0_1 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:25.976 
18:50:24 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:25.976 18:50:24 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:26.234 18:50:24 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:26.234 18:50:24 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:26.234 18:50:24 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:26.234 18:50:24 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:26.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:26.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:43:26.234 00:43:26.234 --- 10.0.0.2 ping statistics --- 00:43:26.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:26.234 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:43:26.234 18:50:24 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:26.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:26.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:43:26.234 00:43:26.234 --- 10.0.0.1 ping statistics --- 00:43:26.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:26.234 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:43:26.234 18:50:24 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:26.234 18:50:24 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:43:26.234 18:50:24 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:26.234 18:50:24 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:27.608 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:27.608 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:43:27.608 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:27.608 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:27.608 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:43:27.608 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:27.608 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:27.608 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:27.608 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:27.608 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:27.608 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:27.608 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:27.608 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:43:27.608 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:27.608 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:27.608 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:27.608 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:27.608 18:50:25 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:27.608 18:50:25 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:27.608 18:50:25 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:27.608 18:50:25 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:27.608 18:50:25 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:27.608 18:50:25 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:27.608 18:50:25 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:43:27.608 18:50:25 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:43:27.608 18:50:25 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:27.608 18:50:25 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:27.608 18:50:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:27.608 18:50:25 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3206503 00:43:27.608 18:50:25 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:43:27.608 18:50:25 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3206503 00:43:27.608 18:50:25 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3206503 ']' 00:43:27.608 18:50:25 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:27.608 18:50:25 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:27.608 18:50:25 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:43:27.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:27.608 18:50:25 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:27.608 18:50:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:27.608 [2024-11-18 18:50:25.874718] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:43:27.608 [2024-11-18 18:50:25.874865] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:27.866 [2024-11-18 18:50:26.018658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:27.866 [2024-11-18 18:50:26.149409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:27.866 [2024-11-18 18:50:26.149506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:27.866 [2024-11-18 18:50:26.149531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:27.866 [2024-11-18 18:50:26.149555] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:27.866 [2024-11-18 18:50:26.149575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:27.866 [2024-11-18 18:50:26.151255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:28.799 18:50:26 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:28.799 18:50:26 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:43:28.799 18:50:26 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:28.799 18:50:26 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:28.799 18:50:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:28.799 18:50:26 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:28.799 18:50:26 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:43:28.799 18:50:26 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:43:28.799 18:50:26 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.799 18:50:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:28.799 [2024-11-18 18:50:26.907253] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:28.799 18:50:26 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.799 18:50:26 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:43:28.799 18:50:26 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:28.799 18:50:26 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:28.799 18:50:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:28.799 ************************************ 00:43:28.799 START TEST fio_dif_1_default 00:43:28.799 ************************************ 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:28.799 bdev_null0 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:28.799 [2024-11-18 18:50:26.967571] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:28.799 18:50:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:28.799 { 00:43:28.799 "params": { 00:43:28.799 "name": "Nvme$subsystem", 00:43:28.799 "trtype": "$TEST_TRANSPORT", 00:43:28.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:28.799 "adrfam": "ipv4", 00:43:28.799 "trsvcid": "$NVMF_PORT", 00:43:28.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:28.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:28.799 "hdgst": ${hdgst:-false}, 00:43:28.799 "ddgst": ${ddgst:-false} 00:43:28.799 }, 00:43:28.799 "method": "bdev_nvme_attach_controller" 00:43:28.799 } 00:43:28.799 EOF 00:43:28.799 )") 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:28.800 "params": { 00:43:28.800 "name": "Nvme0", 00:43:28.800 "trtype": "tcp", 00:43:28.800 "traddr": "10.0.0.2", 00:43:28.800 "adrfam": "ipv4", 00:43:28.800 "trsvcid": "4420", 00:43:28.800 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:28.800 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:28.800 "hdgst": false, 00:43:28.800 "ddgst": false 00:43:28.800 }, 00:43:28.800 "method": "bdev_nvme_attach_controller" 00:43:28.800 }' 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:28.800 18:50:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:29.058 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:29.058 fio-3.35 00:43:29.058 Starting 1 thread 00:43:41.250 00:43:41.250 filename0: (groupid=0, jobs=1): err= 0: pid=3206925: Mon Nov 18 18:50:38 2024 00:43:41.250 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:43:41.250 slat (nsec): min=6189, max=43901, avg=13104.42, stdev=4566.23 00:43:41.250 clat (usec): min=40840, max=43255, avg=40973.35, stdev=160.06 00:43:41.250 lat (usec): min=40851, max=43285, avg=40986.45, stdev=160.59 00:43:41.250 clat percentiles (usec): 00:43:41.250 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 
00:43:41.250 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:41.250 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:41.250 | 99.00th=[41157], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:43:41.250 | 99.99th=[43254] 00:43:41.250 bw ( KiB/s): min= 384, max= 416, per=99.46%, avg=388.80, stdev=11.72, samples=20 00:43:41.250 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:43:41.250 lat (msec) : 50=100.00% 00:43:41.250 cpu : usr=92.95%, sys=6.58%, ctx=17, majf=0, minf=1636 00:43:41.250 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:41.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:41.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:41.250 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:41.250 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:41.250 00:43:41.250 Run status group 0 (all jobs): 00:43:41.250 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10008-10008msec 00:43:41.250 ----------------------------------------------------- 00:43:41.250 Suppressions used: 00:43:41.250 count bytes template 00:43:41.250 1 8 /usr/src/fio/parse.c 00:43:41.250 1 8 libtcmalloc_minimal.so 00:43:41.250 1 904 libcrypto.so 00:43:41.250 ----------------------------------------------------- 00:43:41.250 00:43:41.250 18:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:43:41.250 18:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:43:41.250 18:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:43:41.250 18:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:41.250 18:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:43:41.250 18:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:41.250 18:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.250 18:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:41.250 18:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.250 18:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:41.250 18:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.250 18:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:41.250 18:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.250 00:43:41.250 real 0m12.471s 00:43:41.250 user 0m11.666s 00:43:41.250 sys 0m1.130s 00:43:41.250 18:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:41.250 18:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:41.250 ************************************ 00:43:41.250 END TEST fio_dif_1_default 00:43:41.250 ************************************ 00:43:41.250 18:50:39 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:43:41.250 18:50:39 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:41.250 18:50:39 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:41.250 18:50:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:41.251 ************************************ 00:43:41.251 START TEST fio_dif_1_multi_subsystems 00:43:41.251 ************************************ 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- 
# create_subsystems 0 1 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:41.251 bdev_null0 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:41.251 [2024-11-18 18:50:39.488217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:41.251 bdev_null1 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 
00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:41.251 { 00:43:41.251 "params": { 00:43:41.251 "name": "Nvme$subsystem", 00:43:41.251 "trtype": "$TEST_TRANSPORT", 00:43:41.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:41.251 "adrfam": "ipv4", 00:43:41.251 "trsvcid": "$NVMF_PORT", 00:43:41.251 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:43:41.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:41.251 "hdgst": ${hdgst:-false}, 00:43:41.251 "ddgst": ${ddgst:-false} 00:43:41.251 }, 00:43:41.251 "method": "bdev_nvme_attach_controller" 00:43:41.251 } 00:43:41.251 EOF 00:43:41.251 )") 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- 
# ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:41.251 { 00:43:41.251 "params": { 00:43:41.251 "name": "Nvme$subsystem", 00:43:41.251 "trtype": "$TEST_TRANSPORT", 00:43:41.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:41.251 "adrfam": "ipv4", 00:43:41.251 "trsvcid": "$NVMF_PORT", 00:43:41.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:41.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:41.251 "hdgst": ${hdgst:-false}, 00:43:41.251 "ddgst": ${ddgst:-false} 00:43:41.251 }, 00:43:41.251 "method": "bdev_nvme_attach_controller" 00:43:41.251 } 00:43:41.251 EOF 00:43:41.251 )") 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:41.251 "params": { 00:43:41.251 "name": "Nvme0", 00:43:41.251 "trtype": "tcp", 00:43:41.251 "traddr": "10.0.0.2", 00:43:41.251 "adrfam": "ipv4", 00:43:41.251 "trsvcid": "4420", 00:43:41.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:41.251 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:41.251 "hdgst": false, 00:43:41.251 "ddgst": false 00:43:41.251 }, 00:43:41.251 "method": "bdev_nvme_attach_controller" 00:43:41.251 },{ 00:43:41.251 "params": { 00:43:41.251 "name": "Nvme1", 00:43:41.251 "trtype": "tcp", 00:43:41.251 "traddr": "10.0.0.2", 00:43:41.251 "adrfam": "ipv4", 00:43:41.251 "trsvcid": "4420", 00:43:41.251 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:41.251 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:41.251 "hdgst": false, 00:43:41.251 "ddgst": false 00:43:41.251 }, 00:43:41.251 "method": "bdev_nvme_attach_controller" 00:43:41.251 }' 00:43:41.251 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:41.252 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:41.252 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:43:41.252 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:41.252 18:50:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:41.509 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:41.509 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:41.509 fio-3.35 00:43:41.509 Starting 2 threads 00:43:53.705 00:43:53.705 filename0: (groupid=0, jobs=1): err= 0: pid=3208454: Mon Nov 18 18:50:51 2024 00:43:53.705 read: IOPS=97, BW=388KiB/s (398kB/s)(3888KiB/10009msec) 00:43:53.705 slat (nsec): min=5677, max=33521, avg=13667.60, stdev=4125.59 00:43:53.705 clat (usec): min=40839, max=42566, avg=41145.04, stdev=398.66 00:43:53.705 lat (usec): min=40850, max=42600, avg=41158.71, stdev=398.92 00:43:53.705 clat percentiles (usec): 00:43:53.705 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:53.705 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:53.705 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:43:53.705 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:43:53.705 | 99.99th=[42730] 00:43:53.705 bw ( KiB/s): min= 384, max= 416, per=40.28%, avg=387.20, stdev= 9.85, samples=20 00:43:53.705 iops : min= 96, max= 104, avg=96.80, stdev= 2.46, samples=20 00:43:53.705 lat (msec) : 50=100.00% 00:43:53.705 cpu : usr=94.71%, sys=4.78%, ctx=14, majf=0, minf=1636 00:43:53.705 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:53.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.705 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:53.705 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:53.705 filename1: (groupid=0, jobs=1): err= 0: pid=3208455: Mon Nov 18 18:50:51 2024 00:43:53.705 read: IOPS=143, BW=572KiB/s (586kB/s)(5728KiB/10008msec) 00:43:53.705 slat (nsec): min=5683, max=74440, avg=14366.84, stdev=5759.93 00:43:53.705 clat (usec): min=664, max=42242, avg=27908.12, stdev=18914.45 00:43:53.705 lat (usec): min=674, max=42261, avg=27922.48, stdev=18914.76 00:43:53.705 clat 
percentiles (usec): 00:43:53.705 | 1.00th=[ 701], 5.00th=[ 717], 10.00th=[ 734], 20.00th=[ 775], 00:43:53.705 | 30.00th=[ 816], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:53.705 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:53.705 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:53.705 | 99.99th=[42206] 00:43:53.705 bw ( KiB/s): min= 384, max= 768, per=59.43%, avg=571.20, stdev=185.22, samples=20 00:43:53.705 iops : min= 96, max= 192, avg=142.80, stdev=46.31, samples=20 00:43:53.705 lat (usec) : 750=15.57%, 1000=16.55% 00:43:53.705 lat (msec) : 2=0.28%, 4=0.28%, 50=67.32% 00:43:53.705 cpu : usr=94.46%, sys=5.03%, ctx=13, majf=0, minf=1636 00:43:53.705 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:53.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.705 issued rwts: total=1432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:53.705 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:53.705 00:43:53.705 Run status group 0 (all jobs): 00:43:53.705 READ: bw=961KiB/s (984kB/s), 388KiB/s-572KiB/s (398kB/s-586kB/s), io=9616KiB (9847kB), run=10008-10009msec 00:43:53.963 ----------------------------------------------------- 00:43:53.963 Suppressions used: 00:43:53.963 count bytes template 00:43:53.963 2 16 /usr/src/fio/parse.c 00:43:53.963 1 8 libtcmalloc_minimal.so 00:43:53.963 1 904 libcrypto.so 00:43:53.963 ----------------------------------------------------- 00:43:53.963 00:43:53.963 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:43:53.963 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:43:53.963 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:53.963 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 
-- # destroy_subsystem 0 00:43:53.963 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:43:53.963 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:53.963 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.964 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:53.964 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.964 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:53.964 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.964 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:53.964 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.964 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:53.964 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:53.964 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:43:53.964 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:53.964 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.964 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:53.964 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.964 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:53.964 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 
-- # xtrace_disable 00:43:53.964 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:53.964 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.964 00:43:53.964 real 0m12.777s 00:43:53.964 user 0m21.481s 00:43:53.964 sys 0m1.494s 00:43:53.964 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:53.964 18:50:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:53.964 ************************************ 00:43:53.964 END TEST fio_dif_1_multi_subsystems 00:43:53.964 ************************************ 00:43:53.964 18:50:52 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:43:53.964 18:50:52 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:53.964 18:50:52 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:53.964 18:50:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:53.964 ************************************ 00:43:53.964 START TEST fio_dif_rand_params 00:43:53.964 ************************************ 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@105 -- # create_subsystems 0 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:53.964 bdev_null0 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.964 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:54.222 18:50:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:54.222 [2024-11-18 18:50:52.311372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:54.222 { 00:43:54.222 "params": { 00:43:54.222 "name": "Nvme$subsystem", 00:43:54.222 "trtype": "$TEST_TRANSPORT", 00:43:54.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:54.222 "adrfam": "ipv4", 00:43:54.222 "trsvcid": "$NVMF_PORT", 00:43:54.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:54.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:54.222 "hdgst": ${hdgst:-false}, 00:43:54.222 "ddgst": 
${ddgst:-false} 00:43:54.222 }, 00:43:54.222 "method": "bdev_nvme_attach_controller" 00:43:54.222 } 00:43:54.222 EOF 00:43:54.222 )") 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:54.222 "params": { 00:43:54.222 "name": "Nvme0", 00:43:54.222 "trtype": "tcp", 00:43:54.222 "traddr": "10.0.0.2", 00:43:54.222 "adrfam": "ipv4", 00:43:54.222 "trsvcid": "4420", 00:43:54.222 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:54.222 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:54.222 "hdgst": false, 00:43:54.222 "ddgst": false 00:43:54.222 }, 00:43:54.222 "method": "bdev_nvme_attach_controller" 00:43:54.222 }' 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:54.222 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:43:54.223 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:54.223 18:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:54.481 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:54.481 ... 
00:43:54.481 fio-3.35 00:43:54.481 Starting 3 threads 00:44:01.034 00:44:01.034 filename0: (groupid=0, jobs=1): err= 0: pid=3209970: Mon Nov 18 18:50:58 2024 00:44:01.034 read: IOPS=224, BW=28.1MiB/s (29.4MB/s)(141MiB/5008msec) 00:44:01.034 slat (nsec): min=5992, max=41032, avg=19794.67, stdev=2940.11 00:44:01.034 clat (usec): min=8057, max=55299, avg=13340.17, stdev=4707.05 00:44:01.034 lat (usec): min=8076, max=55319, avg=13359.97, stdev=4707.01 00:44:01.034 clat percentiles (usec): 00:44:01.034 | 1.00th=[ 9634], 5.00th=[10945], 10.00th=[11338], 20.00th=[11731], 00:44:01.034 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12780], 60.00th=[13173], 00:44:01.034 | 70.00th=[13435], 80.00th=[13960], 90.00th=[14484], 95.00th=[15139], 00:44:01.034 | 99.00th=[51643], 99.50th=[53216], 99.90th=[54789], 99.95th=[55313], 00:44:01.034 | 99.99th=[55313] 00:44:01.034 bw ( KiB/s): min=24576, max=31232, per=38.47%, avg=28697.60, stdev=1951.32, samples=10 00:44:01.034 iops : min= 192, max= 244, avg=224.20, stdev=15.24, samples=10 00:44:01.034 lat (msec) : 10=1.51%, 20=97.15%, 50=0.27%, 100=1.07% 00:44:01.034 cpu : usr=93.53%, sys=5.91%, ctx=7, majf=0, minf=1636 00:44:01.034 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:01.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:01.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:01.034 issued rwts: total=1124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:01.034 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:01.034 filename0: (groupid=0, jobs=1): err= 0: pid=3209971: Mon Nov 18 18:50:58 2024 00:44:01.034 read: IOPS=191, BW=24.0MiB/s (25.1MB/s)(121MiB/5050msec) 00:44:01.034 slat (nsec): min=10869, max=56512, avg=20796.21, stdev=2953.93 00:44:01.034 clat (usec): min=8099, max=58681, avg=15564.52, stdev=3685.13 00:44:01.034 lat (usec): min=8119, max=58698, avg=15585.32, stdev=3684.96 00:44:01.034 clat percentiles (usec): 00:44:01.034 
| 1.00th=[ 8979], 5.00th=[10290], 10.00th=[12649], 20.00th=[13566], 00:44:01.034 | 30.00th=[14222], 40.00th=[15008], 50.00th=[15795], 60.00th=[16450], 00:44:01.034 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17957], 95.00th=[18220], 00:44:01.034 | 99.00th=[19268], 99.50th=[50070], 99.90th=[58459], 99.95th=[58459], 00:44:01.034 | 99.99th=[58459] 00:44:01.034 bw ( KiB/s): min=22784, max=26880, per=33.16%, avg=24734.60, stdev=1612.28, samples=10 00:44:01.034 iops : min= 178, max= 210, avg=193.20, stdev=12.59, samples=10 00:44:01.034 lat (msec) : 10=3.92%, 20=95.56%, 50=0.10%, 100=0.41% 00:44:01.034 cpu : usr=83.58%, sys=9.90%, ctx=262, majf=0, minf=1634 00:44:01.034 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:01.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:01.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:01.034 issued rwts: total=969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:01.034 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:01.034 filename0: (groupid=0, jobs=1): err= 0: pid=3209972: Mon Nov 18 18:50:58 2024 00:44:01.034 read: IOPS=168, BW=21.1MiB/s (22.1MB/s)(106MiB/5046msec) 00:44:01.034 slat (nsec): min=6140, max=38163, avg=19366.76, stdev=2080.23 00:44:01.034 clat (usec): min=8783, max=56809, avg=17736.95, stdev=4492.42 00:44:01.034 lat (usec): min=8802, max=56830, avg=17756.32, stdev=4492.33 00:44:01.034 clat percentiles (usec): 00:44:01.034 | 1.00th=[10421], 5.00th=[13435], 10.00th=[15008], 20.00th=[15926], 00:44:01.034 | 30.00th=[16712], 40.00th=[17171], 50.00th=[17433], 60.00th=[17957], 00:44:01.034 | 70.00th=[18482], 80.00th=[19006], 90.00th=[19530], 95.00th=[20317], 00:44:01.034 | 99.00th=[47973], 99.50th=[53740], 99.90th=[56886], 99.95th=[56886], 00:44:01.034 | 99.99th=[56886] 00:44:01.034 bw ( KiB/s): min=17920, max=24064, per=29.07%, avg=21683.20, stdev=1641.64, samples=10 00:44:01.034 iops : min= 140, max= 188, avg=169.40, 
stdev=12.83, samples=10 00:44:01.034 lat (msec) : 10=0.35%, 20=93.53%, 50=5.29%, 100=0.82% 00:44:01.034 cpu : usr=93.86%, sys=5.55%, ctx=13, majf=0, minf=1635 00:44:01.034 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:01.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:01.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:01.034 issued rwts: total=850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:01.034 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:01.034 00:44:01.034 Run status group 0 (all jobs): 00:44:01.034 READ: bw=72.8MiB/s (76.4MB/s), 21.1MiB/s-28.1MiB/s (22.1MB/s-29.4MB/s), io=368MiB (386MB), run=5008-5050msec 00:44:01.601 ----------------------------------------------------- 00:44:01.601 Suppressions used: 00:44:01.601 count bytes template 00:44:01.601 5 44 /usr/src/fio/parse.c 00:44:01.601 1 8 libtcmalloc_minimal.so 00:44:01.601 1 904 libcrypto.so 00:44:01.601 ----------------------------------------------------- 00:44:01.601 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.601 18:50:59 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:01.601 bdev_null0 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:01.601 [2024-11-18 18:50:59.854871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:44:01.601 bdev_null1 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:01.601 bdev_null2 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:44:01.601 18:50:59 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:01.601 18:50:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:01.601 { 00:44:01.601 "params": { 00:44:01.601 "name": "Nvme$subsystem", 00:44:01.602 "trtype": "$TEST_TRANSPORT", 00:44:01.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:01.602 "adrfam": "ipv4", 00:44:01.602 "trsvcid": "$NVMF_PORT", 00:44:01.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:01.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:01.602 "hdgst": ${hdgst:-false}, 00:44:01.602 "ddgst": ${ddgst:-false} 00:44:01.602 }, 00:44:01.602 "method": "bdev_nvme_attach_controller" 00:44:01.602 } 00:44:01.602 EOF 00:44:01.602 )") 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:01.602 { 00:44:01.602 "params": { 00:44:01.602 "name": "Nvme$subsystem", 00:44:01.602 "trtype": "$TEST_TRANSPORT", 00:44:01.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:01.602 "adrfam": "ipv4", 00:44:01.602 "trsvcid": "$NVMF_PORT", 00:44:01.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:01.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:01.602 "hdgst": ${hdgst:-false}, 00:44:01.602 "ddgst": ${ddgst:-false} 00:44:01.602 }, 00:44:01.602 "method": "bdev_nvme_attach_controller" 00:44:01.602 } 00:44:01.602 EOF 00:44:01.602 )") 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:01.602 { 00:44:01.602 "params": { 00:44:01.602 "name": "Nvme$subsystem", 00:44:01.602 "trtype": "$TEST_TRANSPORT", 00:44:01.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:01.602 "adrfam": "ipv4", 00:44:01.602 "trsvcid": "$NVMF_PORT", 00:44:01.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:01.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:01.602 "hdgst": ${hdgst:-false}, 00:44:01.602 "ddgst": ${ddgst:-false} 00:44:01.602 }, 00:44:01.602 "method": "bdev_nvme_attach_controller" 00:44:01.602 } 00:44:01.602 EOF 00:44:01.602 )") 00:44:01.602 18:50:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:01.860 18:50:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:44:01.860 18:50:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:44:01.860 18:50:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:01.860 "params": { 00:44:01.860 "name": "Nvme0", 00:44:01.860 "trtype": "tcp", 00:44:01.860 "traddr": "10.0.0.2", 00:44:01.860 "adrfam": "ipv4", 00:44:01.860 "trsvcid": "4420", 00:44:01.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:01.861 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:01.861 "hdgst": false, 00:44:01.861 "ddgst": false 00:44:01.861 }, 00:44:01.861 "method": "bdev_nvme_attach_controller" 00:44:01.861 },{ 00:44:01.861 "params": { 00:44:01.861 "name": "Nvme1", 00:44:01.861 "trtype": "tcp", 00:44:01.861 "traddr": "10.0.0.2", 00:44:01.861 "adrfam": "ipv4", 00:44:01.861 "trsvcid": "4420", 00:44:01.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:01.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:01.861 "hdgst": false, 00:44:01.861 "ddgst": false 00:44:01.861 }, 00:44:01.861 "method": "bdev_nvme_attach_controller" 00:44:01.861 },{ 00:44:01.861 "params": { 00:44:01.861 "name": "Nvme2", 00:44:01.861 "trtype": "tcp", 00:44:01.861 "traddr": "10.0.0.2", 00:44:01.861 "adrfam": "ipv4", 00:44:01.861 "trsvcid": "4420", 00:44:01.861 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:44:01.861 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:44:01.861 "hdgst": false, 00:44:01.861 "ddgst": false 00:44:01.861 }, 00:44:01.861 "method": "bdev_nvme_attach_controller" 00:44:01.861 }' 00:44:01.861 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:01.861 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:01.861 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:44:01.861 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:01.861 18:50:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:02.119 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:02.119 ... 00:44:02.119 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:02.119 ... 00:44:02.119 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:02.119 ... 00:44:02.119 fio-3.35 00:44:02.119 Starting 24 threads 00:44:14.319 00:44:14.319 filename0: (groupid=0, jobs=1): err= 0: pid=3211059: Mon Nov 18 18:51:11 2024 00:44:14.319 read: IOPS=353, BW=1412KiB/s (1446kB/s)(13.8MiB/10015msec) 00:44:14.319 slat (nsec): min=7444, max=76593, avg=31073.63, stdev=11697.78 00:44:14.319 clat (usec): min=15345, max=99253, avg=45058.40, stdev=4208.48 00:44:14.319 lat (usec): min=15399, max=99277, avg=45089.47, stdev=4207.23 00:44:14.319 clat percentiles (usec): 00:44:14.319 | 1.00th=[42206], 5.00th=[42730], 10.00th=[43254], 20.00th=[43254], 00:44:14.319 | 30.00th=[43779], 40.00th=[44303], 50.00th=[44303], 60.00th=[44827], 00:44:14.319 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46924], 95.00th=[47449], 00:44:14.319 | 99.00th=[61080], 99.50th=[67634], 99.90th=[84411], 99.95th=[99091], 00:44:14.319 | 99.99th=[99091] 00:44:14.319 bw ( KiB/s): min= 1280, max= 1536, per=4.13%, avg=1408.11, stdev=73.90, samples=19 00:44:14.319 iops : min= 320, max= 384, avg=352.00, stdev=18.52, samples=19 00:44:14.319 lat (msec) : 20=0.06%, 50=96.80%, 100=3.14% 00:44:14.319 cpu : usr=98.27%, sys=1.21%, ctx=16, majf=0, minf=1634 00:44:14.319 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:44:14.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:44:14.319 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.319 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.319 filename0: (groupid=0, jobs=1): err= 0: pid=3211060: Mon Nov 18 18:51:11 2024 00:44:14.319 read: IOPS=352, BW=1410KiB/s (1444kB/s)(13.8MiB/10028msec) 00:44:14.319 slat (nsec): min=12427, max=98175, avg=42487.62, stdev=13877.22 00:44:14.319 clat (msec): min=28, max=113, avg=44.98, stdev= 4.68 00:44:14.319 lat (msec): min=28, max=113, avg=45.02, stdev= 4.67 00:44:14.319 clat percentiles (msec): 00:44:14.319 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 44], 00:44:14.319 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 45], 00:44:14.319 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:44:14.319 | 99.00th=[ 58], 99.50th=[ 63], 99.90th=[ 103], 99.95th=[ 114], 00:44:14.319 | 99.99th=[ 114] 00:44:14.319 bw ( KiB/s): min= 1248, max= 1536, per=4.13%, avg=1408.00, stdev=77.65, samples=19 00:44:14.319 iops : min= 312, max= 384, avg=351.89, stdev=19.37, samples=19 00:44:14.319 lat (msec) : 50=96.86%, 100=2.69%, 250=0.45% 00:44:14.319 cpu : usr=98.19%, sys=1.30%, ctx=18, majf=0, minf=1633 00:44:14.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:14.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.319 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.319 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.319 filename0: (groupid=0, jobs=1): err= 0: pid=3211061: Mon Nov 18 18:51:11 2024 00:44:14.319 read: IOPS=352, BW=1411KiB/s (1445kB/s)(13.8MiB/10021msec) 00:44:14.319 slat (nsec): min=10685, max=94355, avg=46932.57, stdev=14742.06 00:44:14.319 clat (msec): min=23, max=102, avg=44.91, stdev= 4.95 
00:44:14.319 lat (msec): min=23, max=102, avg=44.96, stdev= 4.95 00:44:14.319 clat percentiles (msec): 00:44:14.319 | 1.00th=[ 39], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 44], 00:44:14.319 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 45], 00:44:14.319 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:44:14.319 | 99.00th=[ 62], 99.50th=[ 82], 99.90th=[ 104], 99.95th=[ 104], 00:44:14.319 | 99.99th=[ 104] 00:44:14.319 bw ( KiB/s): min= 1154, max= 1536, per=4.13%, avg=1408.11, stdev=95.11, samples=19 00:44:14.319 iops : min= 288, max= 384, avg=352.00, stdev=23.85, samples=19 00:44:14.319 lat (msec) : 50=97.17%, 100=2.38%, 250=0.45% 00:44:14.319 cpu : usr=98.14%, sys=1.32%, ctx=13, majf=0, minf=1635 00:44:14.319 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:14.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.319 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.319 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.319 filename0: (groupid=0, jobs=1): err= 0: pid=3211062: Mon Nov 18 18:51:11 2024 00:44:14.319 read: IOPS=352, BW=1411KiB/s (1445kB/s)(13.8MiB/10025msec) 00:44:14.319 slat (usec): min=13, max=108, avg=46.95, stdev=20.23 00:44:14.319 clat (usec): min=29366, max=97175, avg=44930.74, stdev=4239.21 00:44:14.319 lat (usec): min=29422, max=97206, avg=44977.69, stdev=4233.31 00:44:14.319 clat percentiles (usec): 00:44:14.319 | 1.00th=[42206], 5.00th=[42730], 10.00th=[42730], 20.00th=[43254], 00:44:14.319 | 30.00th=[43779], 40.00th=[43779], 50.00th=[44303], 60.00th=[44827], 00:44:14.319 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46924], 95.00th=[47973], 00:44:14.319 | 99.00th=[54264], 99.50th=[62129], 99.90th=[96994], 99.95th=[96994], 00:44:14.319 | 99.99th=[96994] 00:44:14.319 bw ( KiB/s): min= 1280, max= 1536, per=4.13%, avg=1408.00, stdev=73.90, 
samples=19 00:44:14.319 iops : min= 320, max= 384, avg=352.00, stdev=18.48, samples=19 00:44:14.319 lat (msec) : 50=97.29%, 100=2.71% 00:44:14.320 cpu : usr=98.00%, sys=1.46%, ctx=20, majf=0, minf=1633 00:44:14.320 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:14.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.320 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.320 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.320 filename0: (groupid=0, jobs=1): err= 0: pid=3211063: Mon Nov 18 18:51:11 2024 00:44:14.320 read: IOPS=364, BW=1458KiB/s (1493kB/s)(14.3MiB/10026msec) 00:44:14.320 slat (nsec): min=8432, max=79314, avg=33539.64, stdev=12758.04 00:44:14.320 clat (usec): min=3722, max=66265, avg=43615.63, stdev=6274.46 00:44:14.320 lat (usec): min=3749, max=66299, avg=43649.17, stdev=6273.27 00:44:14.320 clat percentiles (usec): 00:44:14.320 | 1.00th=[13042], 5.00th=[31589], 10.00th=[42730], 20.00th=[43254], 00:44:14.320 | 30.00th=[43779], 40.00th=[43779], 50.00th=[44303], 60.00th=[44827], 00:44:14.320 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46400], 95.00th=[47449], 00:44:14.320 | 99.00th=[56886], 99.50th=[61080], 99.90th=[61604], 99.95th=[66323], 00:44:14.320 | 99.99th=[66323] 00:44:14.320 bw ( KiB/s): min= 1280, max= 2224, per=4.26%, avg=1455.20, stdev=192.38, samples=20 00:44:14.320 iops : min= 320, max= 556, avg=363.80, stdev=48.10, samples=20 00:44:14.320 lat (msec) : 4=0.05%, 10=0.82%, 20=1.29%, 50=95.35%, 100=2.49% 00:44:14.320 cpu : usr=97.98%, sys=1.51%, ctx=16, majf=0, minf=1632 00:44:14.320 IO depths : 1=5.8%, 2=11.7%, 4=24.0%, 8=51.7%, 16=6.8%, 32=0.0%, >=64=0.0% 00:44:14.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.320 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.320 issued rwts: 
total=3654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.320 filename0: (groupid=0, jobs=1): err= 0: pid=3211064: Mon Nov 18 18:51:11 2024 00:44:14.320 read: IOPS=355, BW=1422KiB/s (1456kB/s)(13.9MiB/10037msec) 00:44:14.320 slat (nsec): min=7497, max=80065, avg=34997.40, stdev=12627.44 00:44:14.320 clat (usec): min=20143, max=62387, avg=44702.20, stdev=3046.91 00:44:14.320 lat (usec): min=20157, max=62430, avg=44737.20, stdev=3049.54 00:44:14.320 clat percentiles (usec): 00:44:14.320 | 1.00th=[34866], 5.00th=[42730], 10.00th=[43254], 20.00th=[43254], 00:44:14.320 | 30.00th=[43779], 40.00th=[43779], 50.00th=[44303], 60.00th=[44827], 00:44:14.320 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46924], 95.00th=[47449], 00:44:14.320 | 99.00th=[55837], 99.50th=[57934], 99.90th=[62129], 99.95th=[62129], 00:44:14.320 | 99.99th=[62129] 00:44:14.320 bw ( KiB/s): min= 1280, max= 1536, per=4.15%, avg=1418.35, stdev=72.02, samples=20 00:44:14.320 iops : min= 320, max= 384, avg=354.55, stdev=18.04, samples=20 00:44:14.320 lat (msec) : 50=97.17%, 100=2.83% 00:44:14.320 cpu : usr=98.22%, sys=1.31%, ctx=18, majf=0, minf=1634 00:44:14.320 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:14.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.320 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.320 issued rwts: total=3568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.320 filename0: (groupid=0, jobs=1): err= 0: pid=3211065: Mon Nov 18 18:51:11 2024 00:44:14.320 read: IOPS=352, BW=1412KiB/s (1446kB/s)(13.8MiB/10018msec) 00:44:14.320 slat (usec): min=12, max=117, avg=37.10, stdev=10.03 00:44:14.320 clat (usec): min=28180, max=92547, avg=44968.15, stdev=4040.36 00:44:14.320 lat (usec): min=28198, max=92575, avg=45005.25, stdev=4039.79 00:44:14.320 clat 
percentiles (usec): 00:44:14.320 | 1.00th=[42730], 5.00th=[42730], 10.00th=[43254], 20.00th=[43254], 00:44:14.320 | 30.00th=[43779], 40.00th=[43779], 50.00th=[44303], 60.00th=[44827], 00:44:14.320 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46924], 95.00th=[47449], 00:44:14.320 | 99.00th=[57934], 99.50th=[62653], 99.90th=[92799], 99.95th=[92799], 00:44:14.320 | 99.99th=[92799] 00:44:14.320 bw ( KiB/s): min= 1277, max= 1536, per=4.12%, avg=1407.84, stdev=74.19, samples=19 00:44:14.320 iops : min= 319, max= 384, avg=351.95, stdev=18.57, samples=19 00:44:14.320 lat (msec) : 50=96.83%, 100=3.17% 00:44:14.320 cpu : usr=98.05%, sys=1.41%, ctx=14, majf=0, minf=1631 00:44:14.320 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:14.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.320 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.320 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.320 filename0: (groupid=0, jobs=1): err= 0: pid=3211066: Mon Nov 18 18:51:11 2024 00:44:14.320 read: IOPS=352, BW=1412KiB/s (1446kB/s)(13.8MiB/10018msec) 00:44:14.320 slat (nsec): min=13119, max=72951, avg=31910.40, stdev=8708.63 00:44:14.320 clat (msec): min=23, max=100, avg=45.03, stdev= 4.75 00:44:14.320 lat (msec): min=23, max=100, avg=45.06, stdev= 4.75 00:44:14.320 clat percentiles (msec): 00:44:14.320 | 1.00th=[ 39], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 44], 00:44:14.320 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 45], 00:44:14.320 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:44:14.320 | 99.00th=[ 62], 99.50th=[ 77], 99.90th=[ 101], 99.95th=[ 101], 00:44:14.320 | 99.99th=[ 101] 00:44:14.320 bw ( KiB/s): min= 1280, max= 1536, per=4.13%, avg=1408.00, stdev=60.34, samples=19 00:44:14.320 iops : min= 320, max= 384, avg=352.00, stdev=15.08, samples=19 00:44:14.320 lat 
(msec) : 50=97.12%, 100=2.43%, 250=0.45% 00:44:14.320 cpu : usr=96.99%, sys=1.91%, ctx=70, majf=0, minf=1633 00:44:14.320 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:44:14.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.320 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.320 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.320 filename1: (groupid=0, jobs=1): err= 0: pid=3211067: Mon Nov 18 18:51:11 2024 00:44:14.320 read: IOPS=352, BW=1411KiB/s (1444kB/s)(13.8MiB/10027msec) 00:44:14.320 slat (nsec): min=14269, max=84101, avg=35628.77, stdev=8521.36 00:44:14.320 clat (msec): min=28, max=101, avg=45.04, stdev= 4.53 00:44:14.320 lat (msec): min=28, max=101, avg=45.08, stdev= 4.53 00:44:14.320 clat percentiles (msec): 00:44:14.320 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 44], 00:44:14.320 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 45], 00:44:14.320 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:44:14.320 | 99.00th=[ 58], 99.50th=[ 63], 99.90th=[ 102], 99.95th=[ 102], 00:44:14.320 | 99.99th=[ 102] 00:44:14.320 bw ( KiB/s): min= 1280, max= 1536, per=4.13%, avg=1408.11, stdev=73.71, samples=19 00:44:14.320 iops : min= 320, max= 384, avg=352.00, stdev=18.48, samples=19 00:44:14.320 lat (msec) : 50=96.72%, 100=2.83%, 250=0.45% 00:44:14.320 cpu : usr=96.88%, sys=1.89%, ctx=210, majf=0, minf=1631 00:44:14.320 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:44:14.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.320 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.320 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.320 filename1: (groupid=0, jobs=1): 
err= 0: pid=3211068: Mon Nov 18 18:51:11 2024 00:44:14.320 read: IOPS=354, BW=1419KiB/s (1453kB/s)(13.9MiB/10012msec) 00:44:14.320 slat (nsec): min=9323, max=70611, avg=28658.72, stdev=9066.18 00:44:14.320 clat (usec): min=26132, max=62416, avg=44846.69, stdev=2592.24 00:44:14.320 lat (usec): min=26175, max=62476, avg=44875.35, stdev=2591.70 00:44:14.320 clat percentiles (usec): 00:44:14.320 | 1.00th=[41681], 5.00th=[42730], 10.00th=[43254], 20.00th=[43779], 00:44:14.320 | 30.00th=[43779], 40.00th=[44303], 50.00th=[44303], 60.00th=[44827], 00:44:14.320 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46924], 95.00th=[47449], 00:44:14.320 | 99.00th=[55837], 99.50th=[57934], 99.90th=[62129], 99.95th=[62653], 00:44:14.320 | 99.99th=[62653] 00:44:14.320 bw ( KiB/s): min= 1280, max= 1536, per=4.14%, avg=1414.74, stdev=67.11, samples=19 00:44:14.320 iops : min= 320, max= 384, avg=353.68, stdev=16.78, samples=19 00:44:14.320 lat (msec) : 50=97.30%, 100=2.70% 00:44:14.320 cpu : usr=98.19%, sys=1.27%, ctx=18, majf=0, minf=1632 00:44:14.320 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:14.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.321 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.321 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.321 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.321 filename1: (groupid=0, jobs=1): err= 0: pid=3211069: Mon Nov 18 18:51:11 2024 00:44:14.321 read: IOPS=352, BW=1412KiB/s (1445kB/s)(13.8MiB/10020msec) 00:44:14.321 slat (nsec): min=11570, max=85015, avg=31904.42, stdev=10158.76 00:44:14.321 clat (msec): min=23, max=102, avg=45.04, stdev= 4.90 00:44:14.321 lat (msec): min=23, max=102, avg=45.08, stdev= 4.90 00:44:14.321 clat percentiles (msec): 00:44:14.321 | 1.00th=[ 39], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:44:14.321 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 45], 
00:44:14.321 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:44:14.321 | 99.00th=[ 62], 99.50th=[ 67], 99.90th=[ 103], 99.95th=[ 103], 00:44:14.321 | 99.99th=[ 103] 00:44:14.321 bw ( KiB/s): min= 1280, max= 1536, per=4.13%, avg=1408.00, stdev=60.34, samples=19 00:44:14.321 iops : min= 320, max= 384, avg=352.00, stdev=15.08, samples=19 00:44:14.321 lat (msec) : 50=97.06%, 100=2.49%, 250=0.45% 00:44:14.321 cpu : usr=98.35%, sys=1.16%, ctx=14, majf=0, minf=1631 00:44:14.321 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:14.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.321 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.321 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.321 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.321 filename1: (groupid=0, jobs=1): err= 0: pid=3211070: Mon Nov 18 18:51:11 2024 00:44:14.321 read: IOPS=401, BW=1606KiB/s (1644kB/s)(15.7MiB/10018msec) 00:44:14.321 slat (usec): min=10, max=162, avg=30.53, stdev=17.78 00:44:14.321 clat (msec): min=20, max=121, avg=39.61, stdev= 9.48 00:44:14.321 lat (msec): min=20, max=121, avg=39.64, stdev= 9.48 00:44:14.321 clat percentiles (msec): 00:44:14.321 | 1.00th=[ 27], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 32], 00:44:14.321 | 30.00th=[ 34], 40.00th=[ 38], 50.00th=[ 42], 60.00th=[ 44], 00:44:14.321 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 57], 00:44:14.321 | 99.00th=[ 63], 99.50th=[ 67], 99.90th=[ 122], 99.95th=[ 122], 00:44:14.321 | 99.99th=[ 122] 00:44:14.321 bw ( KiB/s): min= 1152, max= 1808, per=4.67%, avg=1593.26, stdev=165.70, samples=19 00:44:14.321 iops : min= 288, max= 452, avg=398.32, stdev=41.43, samples=19 00:44:14.321 lat (msec) : 50=92.99%, 100=6.61%, 250=0.40% 00:44:14.321 cpu : usr=96.64%, sys=2.13%, ctx=91, majf=0, minf=1633 00:44:14.321 IO depths : 1=1.8%, 2=3.8%, 4=11.3%, 8=71.7%, 16=11.4%, 32=0.0%, >=64=0.0% 
00:44:14.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.321 complete : 0=0.0%, 4=90.2%, 8=4.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.321 issued rwts: total=4022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.321 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.321 filename1: (groupid=0, jobs=1): err= 0: pid=3211071: Mon Nov 18 18:51:11 2024 00:44:14.321 read: IOPS=352, BW=1411KiB/s (1445kB/s)(13.8MiB/10023msec) 00:44:14.321 slat (nsec): min=13581, max=76931, avg=36095.28, stdev=9332.94 00:44:14.321 clat (msec): min=23, max=124, avg=45.02, stdev= 5.11 00:44:14.321 lat (msec): min=23, max=124, avg=45.05, stdev= 5.11 00:44:14.321 clat percentiles (msec): 00:44:14.321 | 1.00th=[ 39], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 44], 00:44:14.321 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 45], 00:44:14.321 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:44:14.321 | 99.00th=[ 62], 99.50th=[ 78], 99.90th=[ 105], 99.95th=[ 125], 00:44:14.321 | 99.99th=[ 125] 00:44:14.321 bw ( KiB/s): min= 1152, max= 1536, per=4.13%, avg=1408.00, stdev=95.41, samples=19 00:44:14.321 iops : min= 288, max= 384, avg=352.00, stdev=23.85, samples=19 00:44:14.321 lat (msec) : 50=97.23%, 100=2.32%, 250=0.45% 00:44:14.321 cpu : usr=98.17%, sys=1.30%, ctx=14, majf=0, minf=1633 00:44:14.321 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:14.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.321 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.321 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.321 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.321 filename1: (groupid=0, jobs=1): err= 0: pid=3211072: Mon Nov 18 18:51:11 2024 00:44:14.321 read: IOPS=352, BW=1411KiB/s (1445kB/s)(13.8MiB/10021msec) 00:44:14.321 slat (nsec): min=12371, max=84017, avg=32906.39, stdev=9273.64 
00:44:14.321 clat (msec): min=24, max=101, avg=45.06, stdev= 3.67 00:44:14.321 lat (msec): min=24, max=102, avg=45.09, stdev= 3.67 00:44:14.321 clat percentiles (msec): 00:44:14.321 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 44], 00:44:14.321 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:14.321 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:44:14.321 | 99.00th=[ 62], 99.50th=[ 69], 99.90th=[ 83], 99.95th=[ 103], 00:44:14.321 | 99.99th=[ 103] 00:44:14.321 bw ( KiB/s): min= 1280, max= 1536, per=4.13%, avg=1408.70, stdev=71.75, samples=20 00:44:14.321 iops : min= 320, max= 384, avg=352.00, stdev=17.98, samples=20 00:44:14.321 lat (msec) : 50=97.23%, 100=2.71%, 250=0.06% 00:44:14.321 cpu : usr=98.07%, sys=1.37%, ctx=29, majf=0, minf=1633 00:44:14.321 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:14.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.321 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.321 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.321 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.321 filename1: (groupid=0, jobs=1): err= 0: pid=3211073: Mon Nov 18 18:51:11 2024 00:44:14.321 read: IOPS=354, BW=1419KiB/s (1453kB/s)(13.9MiB/10012msec) 00:44:14.321 slat (nsec): min=7511, max=83924, avg=32337.79, stdev=11067.01 00:44:14.321 clat (usec): min=26190, max=62412, avg=44815.26, stdev=2578.99 00:44:14.321 lat (usec): min=26236, max=62472, avg=44847.59, stdev=2579.70 00:44:14.321 clat percentiles (usec): 00:44:14.321 | 1.00th=[41681], 5.00th=[42730], 10.00th=[43254], 20.00th=[43779], 00:44:14.321 | 30.00th=[43779], 40.00th=[44303], 50.00th=[44303], 60.00th=[44827], 00:44:14.321 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46924], 95.00th=[47449], 00:44:14.321 | 99.00th=[55837], 99.50th=[57934], 99.90th=[62129], 99.95th=[62129], 00:44:14.321 | 99.99th=[62653] 
00:44:14.321 bw ( KiB/s): min= 1280, max= 1536, per=4.14%, avg=1414.74, stdev=67.11, samples=19 00:44:14.321 iops : min= 320, max= 384, avg=353.68, stdev=16.78, samples=19 00:44:14.321 lat (msec) : 50=97.30%, 100=2.70% 00:44:14.321 cpu : usr=98.25%, sys=1.25%, ctx=18, majf=0, minf=1634 00:44:14.321 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:14.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.321 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.321 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.321 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.321 filename1: (groupid=0, jobs=1): err= 0: pid=3211074: Mon Nov 18 18:51:11 2024 00:44:14.321 read: IOPS=352, BW=1412KiB/s (1446kB/s)(13.8MiB/10017msec) 00:44:14.321 slat (usec): min=6, max=100, avg=45.04, stdev=18.98 00:44:14.321 clat (usec): min=30591, max=89270, avg=44912.32, stdev=3824.46 00:44:14.321 lat (usec): min=30609, max=89290, avg=44957.35, stdev=3818.31 00:44:14.321 clat percentiles (usec): 00:44:14.321 | 1.00th=[42206], 5.00th=[42730], 10.00th=[42730], 20.00th=[43254], 00:44:14.321 | 30.00th=[43779], 40.00th=[43779], 50.00th=[44303], 60.00th=[44827], 00:44:14.321 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46924], 95.00th=[47973], 00:44:14.321 | 99.00th=[54264], 99.50th=[62129], 99.90th=[89654], 99.95th=[89654], 00:44:14.321 | 99.99th=[89654] 00:44:14.321 bw ( KiB/s): min= 1280, max= 1536, per=4.13%, avg=1408.00, stdev=73.90, samples=19 00:44:14.321 iops : min= 320, max= 384, avg=352.00, stdev=18.48, samples=19 00:44:14.321 lat (msec) : 50=97.23%, 100=2.77% 00:44:14.321 cpu : usr=96.72%, sys=2.00%, ctx=173, majf=0, minf=1631 00:44:14.321 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:14.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.321 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:44:14.321 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.321 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.321 filename2: (groupid=0, jobs=1): err= 0: pid=3211075: Mon Nov 18 18:51:11 2024 00:44:14.321 read: IOPS=353, BW=1416KiB/s (1450kB/s)(13.9MiB/10037msec) 00:44:14.321 slat (nsec): min=15554, max=97733, avg=47804.97, stdev=16534.24 00:44:14.321 clat (usec): min=26135, max=70819, avg=44775.69, stdev=3160.32 00:44:14.322 lat (usec): min=26169, max=70846, avg=44823.50, stdev=3155.15 00:44:14.322 clat percentiles (usec): 00:44:14.322 | 1.00th=[38536], 5.00th=[42730], 10.00th=[42730], 20.00th=[43254], 00:44:14.322 | 30.00th=[43779], 40.00th=[43779], 50.00th=[44303], 60.00th=[44827], 00:44:14.322 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46400], 95.00th=[47449], 00:44:14.322 | 99.00th=[57934], 99.50th=[62129], 99.90th=[70779], 99.95th=[70779], 00:44:14.322 | 99.99th=[70779] 00:44:14.322 bw ( KiB/s): min= 1280, max= 1536, per=4.13%, avg=1411.95, stdev=66.49, samples=20 00:44:14.322 iops : min= 320, max= 384, avg=352.95, stdev=16.66, samples=20 00:44:14.322 lat (msec) : 50=96.68%, 100=3.32% 00:44:14.322 cpu : usr=97.73%, sys=1.54%, ctx=41, majf=0, minf=1633 00:44:14.322 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:14.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.322 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.322 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.322 filename2: (groupid=0, jobs=1): err= 0: pid=3211076: Mon Nov 18 18:51:11 2024 00:44:14.322 read: IOPS=352, BW=1410KiB/s (1444kB/s)(13.8MiB/10029msec) 00:44:14.322 slat (nsec): min=5816, max=75044, avg=36967.66, stdev=9814.11 00:44:14.322 clat (usec): min=33024, max=98685, avg=45046.49, stdev=3795.18 00:44:14.322 lat (usec): min=33074, 
max=98705, avg=45083.46, stdev=3793.84 00:44:14.322 clat percentiles (usec): 00:44:14.322 | 1.00th=[42206], 5.00th=[42730], 10.00th=[43254], 20.00th=[43254], 00:44:14.322 | 30.00th=[43779], 40.00th=[43779], 50.00th=[44303], 60.00th=[44827], 00:44:14.322 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46924], 95.00th=[47449], 00:44:14.322 | 99.00th=[57934], 99.50th=[62653], 99.90th=[89654], 99.95th=[98042], 00:44:14.322 | 99.99th=[99091] 00:44:14.322 bw ( KiB/s): min= 1280, max= 1536, per=4.12%, avg=1406.60, stdev=72.20, samples=20 00:44:14.322 iops : min= 320, max= 384, avg=351.65, stdev=18.05, samples=20 00:44:14.322 lat (msec) : 50=96.86%, 100=3.14% 00:44:14.322 cpu : usr=98.31%, sys=1.19%, ctx=14, majf=0, minf=1635 00:44:14.322 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:14.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.322 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.322 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.322 filename2: (groupid=0, jobs=1): err= 0: pid=3211077: Mon Nov 18 18:51:11 2024 00:44:14.322 read: IOPS=354, BW=1416KiB/s (1450kB/s)(13.9MiB/10032msec) 00:44:14.322 slat (nsec): min=7291, max=86797, avg=38880.74, stdev=10753.08 00:44:14.322 clat (usec): min=26263, max=65446, avg=44806.02, stdev=2878.49 00:44:14.322 lat (usec): min=26309, max=65471, avg=44844.90, stdev=2877.98 00:44:14.322 clat percentiles (usec): 00:44:14.322 | 1.00th=[41681], 5.00th=[42730], 10.00th=[43254], 20.00th=[43254], 00:44:14.322 | 30.00th=[43779], 40.00th=[43779], 50.00th=[44303], 60.00th=[44827], 00:44:14.322 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46400], 95.00th=[47449], 00:44:14.322 | 99.00th=[57934], 99.50th=[62129], 99.90th=[65274], 99.95th=[65274], 00:44:14.322 | 99.99th=[65274] 00:44:14.322 bw ( KiB/s): min= 1280, max= 1536, per=4.14%, avg=1412.60, 
stdev=66.01, samples=20 00:44:14.322 iops : min= 320, max= 384, avg=353.15, stdev=16.50, samples=20 00:44:14.322 lat (msec) : 50=96.88%, 100=3.12% 00:44:14.322 cpu : usr=98.22%, sys=1.24%, ctx=13, majf=0, minf=1631 00:44:14.322 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:14.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.322 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.322 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.322 filename2: (groupid=0, jobs=1): err= 0: pid=3211078: Mon Nov 18 18:51:11 2024 00:44:14.322 read: IOPS=354, BW=1420KiB/s (1454kB/s)(13.9MiB/10006msec) 00:44:14.322 slat (nsec): min=7265, max=90980, avg=35434.50, stdev=11403.52 00:44:14.322 clat (usec): min=26285, max=62352, avg=44763.54, stdev=2704.68 00:44:14.322 lat (usec): min=26353, max=62397, avg=44798.97, stdev=2705.20 00:44:14.322 clat percentiles (usec): 00:44:14.322 | 1.00th=[41681], 5.00th=[42730], 10.00th=[43254], 20.00th=[43254], 00:44:14.322 | 30.00th=[43779], 40.00th=[43779], 50.00th=[44303], 60.00th=[44827], 00:44:14.322 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46924], 95.00th=[47449], 00:44:14.322 | 99.00th=[55837], 99.50th=[57934], 99.90th=[62129], 99.95th=[62129], 00:44:14.322 | 99.99th=[62129] 00:44:14.322 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1421.47, stdev=72.59, samples=19 00:44:14.322 iops : min= 320, max= 384, avg=355.37, stdev=18.15, samples=19 00:44:14.322 lat (msec) : 50=97.33%, 100=2.67% 00:44:14.322 cpu : usr=98.34%, sys=1.15%, ctx=21, majf=0, minf=1632 00:44:14.322 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:14.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.322 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.322 issued rwts: total=3552,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:44:14.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.322 filename2: (groupid=0, jobs=1): err= 0: pid=3211079: Mon Nov 18 18:51:11 2024 00:44:14.322 read: IOPS=352, BW=1411KiB/s (1445kB/s)(13.8MiB/10025msec) 00:44:14.322 slat (nsec): min=11520, max=82872, avg=26020.75, stdev=10216.13 00:44:14.322 clat (usec): min=30575, max=97095, avg=45128.29, stdev=4224.18 00:44:14.322 lat (usec): min=30593, max=97127, avg=45154.31, stdev=4224.50 00:44:14.322 clat percentiles (usec): 00:44:14.322 | 1.00th=[41157], 5.00th=[43254], 10.00th=[43254], 20.00th=[43779], 00:44:14.322 | 30.00th=[43779], 40.00th=[44303], 50.00th=[44827], 60.00th=[44827], 00:44:14.322 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46924], 95.00th=[47973], 00:44:14.322 | 99.00th=[56886], 99.50th=[62129], 99.90th=[96994], 99.95th=[96994], 00:44:14.322 | 99.99th=[96994] 00:44:14.322 bw ( KiB/s): min= 1280, max= 1536, per=4.13%, avg=1408.00, stdev=73.90, samples=19 00:44:14.322 iops : min= 320, max= 384, avg=352.00, stdev=18.48, samples=19 00:44:14.322 lat (msec) : 50=97.06%, 100=2.94% 00:44:14.322 cpu : usr=96.97%, sys=1.84%, ctx=100, majf=0, minf=1635 00:44:14.322 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:14.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.322 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.322 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.322 filename2: (groupid=0, jobs=1): err= 0: pid=3211080: Mon Nov 18 18:51:11 2024 00:44:14.322 read: IOPS=352, BW=1412KiB/s (1445kB/s)(13.8MiB/10020msec) 00:44:14.322 slat (nsec): min=12027, max=91879, avg=36654.47, stdev=8826.37 00:44:14.322 clat (usec): min=27910, max=93748, avg=45002.36, stdev=4109.27 00:44:14.322 lat (usec): min=27933, max=93791, avg=45039.02, stdev=4108.38 00:44:14.322 clat 
percentiles (usec): 00:44:14.322 | 1.00th=[42206], 5.00th=[42730], 10.00th=[43254], 20.00th=[43254], 00:44:14.322 | 30.00th=[43779], 40.00th=[43779], 50.00th=[44303], 60.00th=[44827], 00:44:14.322 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46924], 95.00th=[47449], 00:44:14.322 | 99.00th=[57934], 99.50th=[62653], 99.90th=[93848], 99.95th=[93848], 00:44:14.322 | 99.99th=[93848] 00:44:14.322 bw ( KiB/s): min= 1280, max= 1536, per=4.13%, avg=1408.00, stdev=73.90, samples=19 00:44:14.322 iops : min= 320, max= 384, avg=352.00, stdev=18.48, samples=19 00:44:14.322 lat (msec) : 50=96.83%, 100=3.17% 00:44:14.322 cpu : usr=96.91%, sys=2.03%, ctx=139, majf=0, minf=1633 00:44:14.322 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:14.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.322 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.322 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.322 filename2: (groupid=0, jobs=1): err= 0: pid=3211081: Mon Nov 18 18:51:11 2024 00:44:14.322 read: IOPS=352, BW=1412KiB/s (1446kB/s)(13.8MiB/10017msec) 00:44:14.323 slat (nsec): min=5334, max=68617, avg=24902.71, stdev=9027.76 00:44:14.323 clat (usec): min=30563, max=89261, avg=45088.22, stdev=3830.50 00:44:14.323 lat (usec): min=30575, max=89286, avg=45113.12, stdev=3830.52 00:44:14.323 clat percentiles (usec): 00:44:14.323 | 1.00th=[41157], 5.00th=[43254], 10.00th=[43254], 20.00th=[43779], 00:44:14.323 | 30.00th=[43779], 40.00th=[44303], 50.00th=[44827], 60.00th=[44827], 00:44:14.323 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46924], 95.00th=[47973], 00:44:14.323 | 99.00th=[56361], 99.50th=[62129], 99.90th=[89654], 99.95th=[89654], 00:44:14.323 | 99.99th=[89654] 00:44:14.323 bw ( KiB/s): min= 1280, max= 1536, per=4.13%, avg=1408.00, stdev=73.90, samples=19 00:44:14.323 iops : min= 320, max= 384, 
avg=352.00, stdev=18.48, samples=19 00:44:14.323 lat (msec) : 50=97.06%, 100=2.94% 00:44:14.323 cpu : usr=97.14%, sys=1.66%, ctx=142, majf=0, minf=1635 00:44:14.323 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:14.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.323 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.323 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.323 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.323 filename2: (groupid=0, jobs=1): err= 0: pid=3211082: Mon Nov 18 18:51:11 2024 00:44:14.323 read: IOPS=359, BW=1440KiB/s (1474kB/s)(14.1MiB/10002msec) 00:44:14.323 slat (nsec): min=4838, max=68886, avg=16797.73, stdev=8979.70 00:44:14.323 clat (usec): min=3258, max=62443, avg=44276.45, stdev=5439.07 00:44:14.323 lat (usec): min=3270, max=62491, avg=44293.25, stdev=5438.90 00:44:14.323 clat percentiles (usec): 00:44:14.323 | 1.00th=[13304], 5.00th=[42730], 10.00th=[43254], 20.00th=[43779], 00:44:14.323 | 30.00th=[43779], 40.00th=[44303], 50.00th=[44827], 60.00th=[44827], 00:44:14.323 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46924], 95.00th=[47973], 00:44:14.323 | 99.00th=[52691], 99.50th=[54264], 99.90th=[62653], 99.95th=[62653], 00:44:14.323 | 99.99th=[62653] 00:44:14.323 bw ( KiB/s): min= 1280, max= 1795, per=4.22%, avg=1441.84, stdev=103.69, samples=19 00:44:14.323 iops : min= 320, max= 448, avg=360.42, stdev=25.78, samples=19 00:44:14.323 lat (msec) : 4=0.06%, 10=0.83%, 20=1.28%, 50=95.11%, 100=2.72% 00:44:14.323 cpu : usr=97.92%, sys=1.49%, ctx=54, majf=0, minf=1634 00:44:14.323 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:14.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.323 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.323 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:44:14.323 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:14.323 00:44:14.323 Run status group 0 (all jobs): 00:44:14.323 READ: bw=33.3MiB/s (34.9MB/s), 1410KiB/s-1606KiB/s (1444kB/s-1644kB/s), io=335MiB (351MB), run=10002-10037msec 00:44:14.889 ----------------------------------------------------- 00:44:14.889 Suppressions used: 00:44:14.889 count bytes template 00:44:14.889 45 402 /usr/src/fio/parse.c 00:44:14.889 1 8 libtcmalloc_minimal.so 00:44:14.889 1 904 libcrypto.so 00:44:14.889 ----------------------------------------------------- 00:44:14.889 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:14.889 
18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.889 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:44:14.890 18:51:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.890 18:51:12 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:14.890 18:51:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.890 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:44:14.890 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:44:14.890 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:44:14.890 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:44:14.890 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:44:14.890 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:44:14.890 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:44:14.890 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:14.890 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:14.890 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:14.890 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:14.890 18:51:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:14.890 18:51:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.890 18:51:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:14.890 bdev_null0 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:14.890 [2024-11-18 18:51:13.024127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:14.890 bdev_null1 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:14.890 { 00:44:14.890 "params": { 00:44:14.890 "name": "Nvme$subsystem", 00:44:14.890 "trtype": "$TEST_TRANSPORT", 00:44:14.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:14.890 "adrfam": "ipv4", 00:44:14.890 "trsvcid": "$NVMF_PORT", 00:44:14.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:14.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:14.890 "hdgst": ${hdgst:-false}, 00:44:14.890 "ddgst": ${ddgst:-false} 00:44:14.890 }, 00:44:14.890 "method": "bdev_nvme_attach_controller" 00:44:14.890 } 00:44:14.890 EOF 00:44:14.890 )") 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:14.890 
18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:14.890 { 00:44:14.890 "params": { 00:44:14.890 "name": "Nvme$subsystem", 00:44:14.890 "trtype": "$TEST_TRANSPORT", 00:44:14.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:14.890 "adrfam": "ipv4", 00:44:14.890 "trsvcid": "$NVMF_PORT", 00:44:14.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:14.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:14.890 "hdgst": ${hdgst:-false}, 00:44:14.890 "ddgst": ${ddgst:-false} 00:44:14.890 }, 00:44:14.890 "method": "bdev_nvme_attach_controller" 00:44:14.890 } 00:44:14.890 EOF 00:44:14.890 )") 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:14.890 "params": { 00:44:14.890 "name": "Nvme0", 00:44:14.890 "trtype": "tcp", 00:44:14.890 "traddr": "10.0.0.2", 00:44:14.890 "adrfam": "ipv4", 00:44:14.890 "trsvcid": "4420", 00:44:14.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:14.890 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:14.890 "hdgst": false, 00:44:14.890 "ddgst": false 00:44:14.890 }, 00:44:14.890 "method": "bdev_nvme_attach_controller" 00:44:14.890 },{ 00:44:14.890 "params": { 00:44:14.890 "name": "Nvme1", 00:44:14.890 "trtype": "tcp", 00:44:14.890 "traddr": "10.0.0.2", 00:44:14.890 "adrfam": "ipv4", 00:44:14.890 "trsvcid": "4420", 00:44:14.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:14.890 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:14.890 "hdgst": false, 00:44:14.890 "ddgst": false 00:44:14.890 }, 00:44:14.890 "method": "bdev_nvme_attach_controller" 00:44:14.890 }' 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:14.890 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:44:14.891 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:14.891 18:51:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:15.148 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:15.148 ... 
00:44:15.148 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:15.148 ... 00:44:15.148 fio-3.35 00:44:15.148 Starting 4 threads 00:44:21.703 00:44:21.703 filename0: (groupid=0, jobs=1): err= 0: pid=3213096: Mon Nov 18 18:51:19 2024 00:44:21.703 read: IOPS=1461, BW=11.4MiB/s (12.0MB/s)(57.1MiB/5004msec) 00:44:21.703 slat (nsec): min=4984, max=87527, avg=24930.65, stdev=8012.02 00:44:21.703 clat (usec): min=983, max=10030, avg=5379.68, stdev=385.71 00:44:21.703 lat (usec): min=1007, max=10057, avg=5404.61, stdev=386.06 00:44:21.703 clat percentiles (usec): 00:44:21.703 | 1.00th=[ 4293], 5.00th=[ 5014], 10.00th=[ 5145], 20.00th=[ 5211], 00:44:21.703 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5473], 00:44:21.703 | 70.00th=[ 5473], 80.00th=[ 5538], 90.00th=[ 5669], 95.00th=[ 5735], 00:44:21.703 | 99.00th=[ 5997], 99.50th=[ 6390], 99.90th=[ 9110], 99.95th=[ 9372], 00:44:21.703 | 99.99th=[10028] 00:44:21.703 bw ( KiB/s): min=11392, max=11904, per=25.14%, avg=11688.80, stdev=191.87, samples=10 00:44:21.703 iops : min= 1424, max= 1488, avg=1461.10, stdev=23.98, samples=10 00:44:21.703 lat (usec) : 1000=0.01% 00:44:21.703 lat (msec) : 2=0.21%, 4=0.31%, 10=99.45%, 20=0.01% 00:44:21.703 cpu : usr=90.73%, sys=6.00%, ctx=84, majf=0, minf=1636 00:44:21.703 IO depths : 1=0.7%, 2=20.4%, 4=53.4%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:21.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:21.703 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:21.703 issued rwts: total=7315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:21.703 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:21.703 filename0: (groupid=0, jobs=1): err= 0: pid=3213097: Mon Nov 18 18:51:19 2024 00:44:21.703 read: IOPS=1449, BW=11.3MiB/s (11.9MB/s)(56.6MiB/5002msec) 00:44:21.703 slat (nsec): min=4731, max=67091, avg=23847.02, stdev=9306.42 00:44:21.703 
clat (usec): min=1037, max=15989, avg=5431.06, stdev=563.01 00:44:21.703 lat (usec): min=1060, max=16005, avg=5454.91, stdev=562.58 00:44:21.703 clat percentiles (usec): 00:44:21.703 | 1.00th=[ 4359], 5.00th=[ 5014], 10.00th=[ 5145], 20.00th=[ 5211], 00:44:21.703 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5473], 00:44:21.703 | 70.00th=[ 5538], 80.00th=[ 5604], 90.00th=[ 5735], 95.00th=[ 5800], 00:44:21.703 | 99.00th=[ 7963], 99.50th=[ 8979], 99.90th=[13042], 99.95th=[13042], 00:44:21.703 | 99.99th=[15926] 00:44:21.703 bw ( KiB/s): min=11248, max=11776, per=24.87%, avg=11564.44, stdev=209.65, samples=9 00:44:21.703 iops : min= 1406, max= 1472, avg=1445.56, stdev=26.21, samples=9 00:44:21.703 lat (msec) : 2=0.19%, 4=0.58%, 10=99.09%, 20=0.14% 00:44:21.703 cpu : usr=95.68%, sys=3.76%, ctx=7, majf=0, minf=1634 00:44:21.703 IO depths : 1=1.0%, 2=18.8%, 4=55.5%, 8=24.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:21.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:21.703 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:21.703 issued rwts: total=7249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:21.703 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:21.703 filename1: (groupid=0, jobs=1): err= 0: pid=3213098: Mon Nov 18 18:51:19 2024 00:44:21.703 read: IOPS=1448, BW=11.3MiB/s (11.9MB/s)(56.6MiB/5001msec) 00:44:21.703 slat (nsec): min=5231, max=68315, avg=25183.17, stdev=10182.24 00:44:21.703 clat (usec): min=962, max=16954, avg=5427.63, stdev=653.63 00:44:21.704 lat (usec): min=982, max=16971, avg=5452.81, stdev=653.17 00:44:21.704 clat percentiles (usec): 00:44:21.704 | 1.00th=[ 3949], 5.00th=[ 5014], 10.00th=[ 5145], 20.00th=[ 5211], 00:44:21.704 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5473], 00:44:21.704 | 70.00th=[ 5538], 80.00th=[ 5604], 90.00th=[ 5669], 95.00th=[ 5800], 00:44:21.704 | 99.00th=[ 8586], 99.50th=[ 9634], 99.90th=[12780], 99.95th=[12780], 
00:44:21.704 | 99.99th=[16909] 00:44:21.704 bw ( KiB/s): min=11216, max=11776, per=24.85%, avg=11554.44, stdev=214.64, samples=9 00:44:21.704 iops : min= 1402, max= 1472, avg=1444.22, stdev=26.88, samples=9 00:44:21.704 lat (usec) : 1000=0.01% 00:44:21.704 lat (msec) : 2=0.44%, 4=0.55%, 10=98.83%, 20=0.17% 00:44:21.704 cpu : usr=95.42%, sys=3.98%, ctx=9, majf=0, minf=1636 00:44:21.704 IO depths : 1=0.8%, 2=18.7%, 4=55.7%, 8=24.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:21.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:21.704 complete : 0=0.0%, 4=90.5%, 8=9.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:21.704 issued rwts: total=7242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:21.704 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:21.704 filename1: (groupid=0, jobs=1): err= 0: pid=3213099: Mon Nov 18 18:51:19 2024 00:44:21.704 read: IOPS=1453, BW=11.4MiB/s (11.9MB/s)(56.8MiB/5003msec) 00:44:21.704 slat (nsec): min=5069, max=68288, avg=23257.48, stdev=10227.81 00:44:21.704 clat (usec): min=1160, max=16150, avg=5417.90, stdev=481.26 00:44:21.704 lat (usec): min=1179, max=16166, avg=5441.16, stdev=481.19 00:44:21.704 clat percentiles (usec): 00:44:21.704 | 1.00th=[ 4555], 5.00th=[ 5080], 10.00th=[ 5145], 20.00th=[ 5211], 00:44:21.704 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5473], 00:44:21.704 | 70.00th=[ 5538], 80.00th=[ 5604], 90.00th=[ 5669], 95.00th=[ 5735], 00:44:21.704 | 99.00th=[ 6390], 99.50th=[ 8225], 99.90th=[13829], 99.95th=[13829], 00:44:21.704 | 99.99th=[16188] 00:44:21.704 bw ( KiB/s): min=11392, max=11904, per=25.01%, avg=11626.20, stdev=198.31, samples=10 00:44:21.704 iops : min= 1424, max= 1488, avg=1453.20, stdev=24.86, samples=10 00:44:21.704 lat (msec) : 2=0.10%, 4=0.37%, 10=99.40%, 20=0.14% 00:44:21.704 cpu : usr=95.52%, sys=3.92%, ctx=7, majf=0, minf=1634 00:44:21.704 IO depths : 1=1.2%, 2=18.2%, 4=55.2%, 8=25.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:21.704 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:21.704 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:21.704 issued rwts: total=7273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:21.704 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:21.704 00:44:21.704 Run status group 0 (all jobs): 00:44:21.704 READ: bw=45.4MiB/s (47.6MB/s), 11.3MiB/s-11.4MiB/s (11.9MB/s-12.0MB/s), io=227MiB (238MB), run=5001-5004msec 00:44:22.270 ----------------------------------------------------- 00:44:22.270 Suppressions used: 00:44:22.270 count bytes template 00:44:22.270 6 52 /usr/src/fio/parse.c 00:44:22.270 1 8 libtcmalloc_minimal.so 00:44:22.270 1 904 libcrypto.so 00:44:22.270 ----------------------------------------------------- 00:44:22.270 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:22.528 00:44:22.528 real 0m28.387s 00:44:22.528 user 4m36.406s 00:44:22.528 sys 0m7.175s 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:22.528 18:51:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:22.528 ************************************ 00:44:22.528 END TEST fio_dif_rand_params 00:44:22.528 ************************************ 00:44:22.528 18:51:20 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:44:22.528 18:51:20 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:22.528 18:51:20 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:22.528 18:51:20 nvmf_dif -- common/autotest_common.sh@10 -- # 
set +x 00:44:22.528 ************************************ 00:44:22.528 START TEST fio_dif_digest 00:44:22.528 ************************************ 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:22.528 bdev_null0 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:22.528 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:22.529 [2024-11-18 18:51:20.753727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:22.529 18:51:20 
nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:22.529 { 00:44:22.529 "params": { 00:44:22.529 "name": "Nvme$subsystem", 00:44:22.529 "trtype": "$TEST_TRANSPORT", 00:44:22.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:22.529 "adrfam": "ipv4", 00:44:22.529 "trsvcid": "$NVMF_PORT", 00:44:22.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:22.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:22.529 "hdgst": ${hdgst:-false}, 00:44:22.529 "ddgst": ${ddgst:-false} 00:44:22.529 }, 00:44:22.529 "method": "bdev_nvme_attach_controller" 00:44:22.529 } 00:44:22.529 EOF 00:44:22.529 )") 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 
00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:22.529 "params": { 00:44:22.529 "name": "Nvme0", 00:44:22.529 "trtype": "tcp", 00:44:22.529 "traddr": "10.0.0.2", 00:44:22.529 "adrfam": "ipv4", 00:44:22.529 "trsvcid": "4420", 00:44:22.529 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:22.529 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:22.529 "hdgst": true, 00:44:22.529 "ddgst": true 00:44:22.529 }, 00:44:22.529 "method": "bdev_nvme_attach_controller" 00:44:22.529 }' 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:22.529 18:51:20 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:22.787 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:44:22.787 ... 00:44:22.787 fio-3.35 00:44:22.787 Starting 3 threads 00:44:34.985 00:44:34.985 filename0: (groupid=0, jobs=1): err= 0: pid=3213993: Mon Nov 18 18:51:32 2024 00:44:34.985 read: IOPS=173, BW=21.7MiB/s (22.7MB/s)(218MiB/10047msec) 00:44:34.985 slat (nsec): min=6525, max=55491, avg=23601.24, stdev=5596.28 00:44:34.985 clat (usec): min=10370, max=54142, avg=17240.62, stdev=1854.49 00:44:34.985 lat (usec): min=10394, max=54168, avg=17264.23, stdev=1854.34 00:44:34.985 clat percentiles (usec): 00:44:34.985 | 1.00th=[11994], 5.00th=[15008], 10.00th=[15664], 20.00th=[16188], 00:44:34.985 | 30.00th=[16581], 40.00th=[16909], 50.00th=[17171], 60.00th=[17695], 00:44:34.985 | 70.00th=[17957], 80.00th=[18220], 90.00th=[18744], 95.00th=[19268], 00:44:34.985 | 99.00th=[20317], 99.50th=[20579], 99.90th=[51643], 99.95th=[54264], 00:44:34.985 | 99.99th=[54264] 00:44:34.985 bw ( KiB/s): min=21504, max=24320, per=34.76%, avg=22284.80, stdev=735.77, samples=20 00:44:34.985 iops : min= 168, max= 190, avg=174.10, stdev= 5.75, samples=20 00:44:34.985 lat (msec) : 20=98.28%, 50=1.61%, 100=0.11% 00:44:34.985 cpu : usr=93.99%, sys=5.42%, ctx=21, majf=0, minf=1636 00:44:34.985 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:34.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:34.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:34.985 issued rwts: total=1743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:34.985 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:34.985 filename0: (groupid=0, jobs=1): err= 0: pid=3213994: Mon Nov 18 18:51:32 2024 00:44:34.985 read: IOPS=158, BW=19.9MiB/s (20.8MB/s)(200MiB/10045msec) 00:44:34.985 slat 
(nsec): min=6817, max=48163, avg=20671.92, stdev=4226.47 00:44:34.985 clat (usec): min=14199, max=60855, avg=18818.65, stdev=3483.67 00:44:34.985 lat (usec): min=14218, max=60873, avg=18839.32, stdev=3483.52 00:44:34.985 clat percentiles (usec): 00:44:34.985 | 1.00th=[15533], 5.00th=[16712], 10.00th=[16909], 20.00th=[17433], 00:44:34.985 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18482], 60.00th=[18744], 00:44:34.985 | 70.00th=[19268], 80.00th=[19530], 90.00th=[20317], 95.00th=[20841], 00:44:34.985 | 99.00th=[22676], 99.50th=[57410], 99.90th=[60556], 99.95th=[61080], 00:44:34.985 | 99.99th=[61080] 00:44:34.985 bw ( KiB/s): min=18944, max=21504, per=31.85%, avg=20416.00, stdev=737.65, samples=20 00:44:34.985 iops : min= 148, max= 168, avg=159.50, stdev= 5.76, samples=20 00:44:34.985 lat (msec) : 20=87.60%, 50=11.77%, 100=0.63% 00:44:34.985 cpu : usr=93.46%, sys=5.97%, ctx=13, majf=0, minf=1634 00:44:34.985 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:34.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:34.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:34.986 issued rwts: total=1597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:34.986 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:34.986 filename0: (groupid=0, jobs=1): err= 0: pid=3213995: Mon Nov 18 18:51:32 2024 00:44:34.986 read: IOPS=168, BW=21.1MiB/s (22.1MB/s)(212MiB/10046msec) 00:44:34.986 slat (nsec): min=6923, max=46463, avg=20625.50, stdev=4944.29 00:44:34.986 clat (usec): min=10926, max=55979, avg=17761.17, stdev=1895.33 00:44:34.986 lat (usec): min=10962, max=56013, avg=17781.79, stdev=1895.55 00:44:34.986 clat percentiles (usec): 00:44:34.986 | 1.00th=[13304], 5.00th=[15401], 10.00th=[16057], 20.00th=[16581], 00:44:34.986 | 30.00th=[16909], 40.00th=[17433], 50.00th=[17695], 60.00th=[17957], 00:44:34.986 | 70.00th=[18482], 80.00th=[19006], 90.00th=[19530], 95.00th=[20317], 00:44:34.986 | 
99.00th=[21365], 99.50th=[21627], 99.90th=[47973], 99.95th=[55837], 00:44:34.986 | 99.99th=[55837] 00:44:34.986 bw ( KiB/s): min=20224, max=23552, per=33.75%, avg=21634.15, stdev=1038.81, samples=20 00:44:34.986 iops : min= 158, max= 184, avg=169.00, stdev= 8.12, samples=20 00:44:34.986 lat (msec) : 20=93.38%, 50=6.56%, 100=0.06% 00:44:34.986 cpu : usr=93.67%, sys=5.78%, ctx=18, majf=0, minf=1634 00:44:34.986 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:34.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:34.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:34.986 issued rwts: total=1692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:34.986 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:34.986 00:44:34.986 Run status group 0 (all jobs): 00:44:34.986 READ: bw=62.6MiB/s (65.6MB/s), 19.9MiB/s-21.7MiB/s (20.8MB/s-22.7MB/s), io=629MiB (660MB), run=10045-10047msec 00:44:34.986 ----------------------------------------------------- 00:44:34.986 Suppressions used: 00:44:34.986 count bytes template 00:44:34.986 5 44 /usr/src/fio/parse.c 00:44:34.986 1 8 libtcmalloc_minimal.so 00:44:34.986 1 904 libcrypto.so 00:44:34.986 ----------------------------------------------------- 00:44:34.986 00:44:34.986 18:51:33 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:44:34.986 18:51:33 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:44:34.986 18:51:33 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:44:34.986 18:51:33 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:34.986 18:51:33 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:44:34.986 18:51:33 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:34.986 18:51:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:34.986 18:51:33 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:34.986 18:51:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:34.986 18:51:33 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:34.986 18:51:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:34.986 18:51:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:34.986 18:51:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:34.986 00:44:34.986 real 0m12.391s 00:44:34.986 user 0m30.463s 00:44:34.986 sys 0m2.179s 00:44:34.986 18:51:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:34.986 18:51:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:34.986 ************************************ 00:44:34.986 END TEST fio_dif_digest 00:44:34.986 ************************************ 00:44:34.986 18:51:33 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:44:34.986 18:51:33 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:44:34.986 18:51:33 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:34.986 18:51:33 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:44:34.986 18:51:33 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:34.986 18:51:33 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:44:34.986 18:51:33 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:34.986 18:51:33 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:34.986 rmmod nvme_tcp 00:44:34.986 rmmod nvme_fabrics 00:44:34.986 rmmod nvme_keyring 00:44:34.986 18:51:33 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:34.986 18:51:33 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:44:34.986 18:51:33 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:44:34.986 18:51:33 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3206503 ']' 00:44:34.986 18:51:33 nvmf_dif -- 
nvmf/common.sh@518 -- # killprocess 3206503 00:44:34.986 18:51:33 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3206503 ']' 00:44:34.986 18:51:33 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3206503 00:44:34.986 18:51:33 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:44:34.986 18:51:33 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:34.986 18:51:33 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3206503 00:44:34.986 18:51:33 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:34.986 18:51:33 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:34.986 18:51:33 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3206503' 00:44:34.986 killing process with pid 3206503 00:44:34.986 18:51:33 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3206503 00:44:34.986 18:51:33 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3206503 00:44:36.359 18:51:34 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:44:36.359 18:51:34 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:37.293 Waiting for block devices as requested 00:44:37.293 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:44:37.552 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:37.552 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:37.552 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:37.810 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:37.811 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:37.811 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:37.811 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:38.069 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:38.069 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:38.069 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:38.069 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:38.069 0000:80:04.4 (8086 0e24): 
vfio-pci -> ioatdma 00:44:38.327 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:38.327 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:38.327 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:38.585 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:38.585 18:51:36 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:38.585 18:51:36 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:38.585 18:51:36 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:44:38.585 18:51:36 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:44:38.585 18:51:36 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:38.585 18:51:36 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:44:38.585 18:51:36 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:38.585 18:51:36 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:38.585 18:51:36 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:38.585 18:51:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:38.585 18:51:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:41.113 18:51:38 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:41.113 00:44:41.113 real 1m16.792s 00:44:41.113 user 6m47.101s 00:44:41.113 sys 0m18.541s 00:44:41.113 18:51:38 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:41.113 18:51:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:41.113 ************************************ 00:44:41.113 END TEST nvmf_dif 00:44:41.113 ************************************ 00:44:41.113 18:51:38 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:41.113 18:51:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:41.113 18:51:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:41.113 18:51:38 -- 
common/autotest_common.sh@10 -- # set +x 00:44:41.113 ************************************ 00:44:41.113 START TEST nvmf_abort_qd_sizes 00:44:41.113 ************************************ 00:44:41.113 18:51:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:41.113 * Looking for test storage... 00:44:41.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:41.113 18:51:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:44:41.113 18:51:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:44:41.113 18:51:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 
00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:44:41.113 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:44:41.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.114 --rc genhtml_branch_coverage=1 00:44:41.114 --rc genhtml_function_coverage=1 00:44:41.114 --rc genhtml_legend=1 00:44:41.114 --rc geninfo_all_blocks=1 00:44:41.114 --rc geninfo_unexecuted_blocks=1 00:44:41.114 00:44:41.114 ' 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:44:41.114 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.114 --rc genhtml_branch_coverage=1 00:44:41.114 --rc genhtml_function_coverage=1 00:44:41.114 --rc genhtml_legend=1 00:44:41.114 --rc geninfo_all_blocks=1 00:44:41.114 --rc geninfo_unexecuted_blocks=1 00:44:41.114 00:44:41.114 ' 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:44:41.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.114 --rc genhtml_branch_coverage=1 00:44:41.114 --rc genhtml_function_coverage=1 00:44:41.114 --rc genhtml_legend=1 00:44:41.114 --rc geninfo_all_blocks=1 00:44:41.114 --rc geninfo_unexecuted_blocks=1 00:44:41.114 00:44:41.114 ' 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:44:41.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.114 --rc genhtml_branch_coverage=1 00:44:41.114 --rc genhtml_function_coverage=1 00:44:41.114 --rc genhtml_legend=1 00:44:41.114 --rc geninfo_all_blocks=1 00:44:41.114 --rc geninfo_unexecuted_blocks=1 00:44:41.114 00:44:41.114 ' 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:41.114 18:51:39 
nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:41.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:44:41.114 18:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:43.016 18:51:41 
nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:44:43.016 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:44:43.016 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:44:43.016 Found net devices under 0000:0a:00.0: cvl_0_0 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:43.016 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:43.017 18:51:41 nvmf_abort_qd_sizes 
-- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:44:43.017 Found net devices under 0000:0a:00.1: cvl_0_1 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:43.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:43.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:44:43.017 00:44:43.017 --- 10.0.0.2 ping statistics --- 00:44:43.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:43.017 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:43.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:43.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:44:43.017 00:44:43.017 --- 10.0.0.1 ping statistics --- 00:44:43.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:43.017 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:44:43.017 18:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:44.391 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:44.391 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:44.391 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:44.391 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:44.391 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:44.391 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:44.391 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:44.391 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:44.391 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:44.391 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:44.391 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:44.391 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:44.391 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:44.391 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:44.391 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:44.391 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:45.325 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:44:45.325 18:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:45.325 18:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:45.325 18:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:45.325 18:51:43 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:45.325 18:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:45.325 18:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:45.325 18:51:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:44:45.325 18:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:45.325 18:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:45.325 18:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:45.325 18:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3219143 00:44:45.325 18:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:44:45.325 18:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3219143 00:44:45.325 18:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3219143 ']' 00:44:45.325 18:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:45.325 18:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:45.325 18:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:45.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:45.325 18:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:45.325 18:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:45.583 [2024-11-18 18:51:43.744550] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:44:45.583 [2024-11-18 18:51:43.744733] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:45.583 [2024-11-18 18:51:43.891062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:45.842 [2024-11-18 18:51:44.031948] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:45.842 [2024-11-18 18:51:44.032034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:45.842 [2024-11-18 18:51:44.032060] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:45.842 [2024-11-18 18:51:44.032084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:45.842 [2024-11-18 18:51:44.032104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:44:45.842 [2024-11-18 18:51:44.034979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:45.842 [2024-11-18 18:51:44.035051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:45.842 [2024-11-18 18:51:44.035146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:45.842 [2024-11-18 18:51:44.035154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:44:46.408 18:51:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:44:46.409 18:51:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:44:46.409 18:51:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:44:46.409 18:51:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:46.409 18:51:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:46.409 18:51:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:46.667 ************************************ 00:44:46.667 START TEST spdk_target_abort 00:44:46.667 ************************************ 00:44:46.667 18:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:44:46.667 18:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:44:46.667 18:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:44:46.667 18:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.667 18:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:49.949 spdk_targetn1 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:49.949 [2024-11-18 18:51:47.645543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:49.949 [2024-11-18 18:51:47.692200] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:49.949 18:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:53.235 Initializing NVMe Controllers 00:44:53.235 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:53.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:53.235 Initialization complete. Launching workers. 
00:44:53.235 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8462, failed: 0 00:44:53.235 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1173, failed to submit 7289 00:44:53.235 success 756, unsuccessful 417, failed 0 00:44:53.235 18:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:53.235 18:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:56.648 Initializing NVMe Controllers 00:44:56.648 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:56.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:56.648 Initialization complete. Launching workers. 00:44:56.648 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8523, failed: 0 00:44:56.648 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1258, failed to submit 7265 00:44:56.648 success 313, unsuccessful 945, failed 0 00:44:56.648 18:51:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:56.648 18:51:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:59.933 Initializing NVMe Controllers 00:44:59.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:59.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:59.933 Initialization complete. Launching workers. 
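The abort statistics printed above are internally consistent: every completed I/O either had an abort submitted for it or could not have one submitted, and every submitted abort lands in exactly one of success/unsuccessful/failed. Checking that invariant against the first run's numbers (8462 completed, 1173 aborts submitted, 7289 not submitted, 756 success, 417 unsuccessful):

```shell
# Numbers taken from the first qd=4 abort run in the log above.
completed=8462
aborts_submitted=1173
failed_to_submit=7289
success=756
unsuccessful=417
failed=0

# Every completed I/O is accounted for by the abort submission split.
[ $(( aborts_submitted + failed_to_submit )) -eq "$completed" ] && echo "I/O counts consistent"
# Every submitted abort is accounted for by its outcome.
[ $(( success + unsuccessful + failed )) -eq "$aborts_submitted" ] && echo "abort counts consistent"
```

The same two identities hold for the qd=24 run (1258 + 7265 = 8523; 313 + 945 + 0 = 1258).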
00:44:59.933 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27503, failed: 0 00:44:59.933 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2699, failed to submit 24804 00:44:59.933 success 212, unsuccessful 2487, failed 0 00:44:59.933 18:51:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:44:59.933 18:51:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:59.933 18:51:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:59.933 18:51:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:59.933 18:51:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:44:59.933 18:51:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:59.933 18:51:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:00.867 18:51:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:00.867 18:51:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3219143 00:45:00.867 18:51:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3219143 ']' 00:45:00.867 18:51:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3219143 00:45:00.867 18:51:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:45:00.867 18:51:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:00.867 18:51:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3219143 00:45:00.867 18:51:59 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:00.867 18:51:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:00.867 18:51:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3219143' 00:45:00.867 killing process with pid 3219143 00:45:00.868 18:51:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3219143 00:45:00.868 18:51:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3219143 00:45:01.801 00:45:01.801 real 0m15.213s 00:45:01.801 user 0m59.437s 00:45:01.801 sys 0m2.789s 00:45:01.801 18:51:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:01.801 18:51:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:01.801 ************************************ 00:45:01.801 END TEST spdk_target_abort 00:45:01.801 ************************************ 00:45:01.801 18:51:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:45:01.801 18:51:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:01.801 18:51:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:01.801 18:51:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:01.801 ************************************ 00:45:01.801 START TEST kernel_target_abort 00:45:01.801 ************************************ 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:45:01.801 18:52:00 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:45:01.801 18:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:03.174 Waiting for block devices as requested 00:45:03.174 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:45:03.174 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:03.174 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:03.432 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:03.432 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:03.432 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:03.432 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:03.432 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:03.690 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:03.690 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:03.690 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:03.947 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:03.947 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:03.947 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:03.947 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:04.205 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:04.205 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:04.771 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:45:04.771 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:45:04.771 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:45:04.771 18:52:02 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:45:04.771 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:45:04.771 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:45:04.771 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:45:04.771 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:45:04.771 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:45:04.771 No valid GPT data, bailing 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:45:04.772 00:45:04.772 Discovery Log Number of Records 2, Generation counter 2 00:45:04.772 =====Discovery Log Entry 0====== 00:45:04.772 trtype: tcp 00:45:04.772 adrfam: ipv4 00:45:04.772 subtype: current discovery subsystem 00:45:04.772 treq: not specified, sq flow control disable supported 00:45:04.772 portid: 1 00:45:04.772 trsvcid: 4420 00:45:04.772 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:45:04.772 traddr: 10.0.0.1 00:45:04.772 eflags: none 00:45:04.772 sectype: none 00:45:04.772 =====Discovery Log Entry 1====== 00:45:04.772 trtype: tcp 00:45:04.772 adrfam: ipv4 00:45:04.772 subtype: nvme subsystem 00:45:04.772 treq: not specified, sq flow control disable supported 00:45:04.772 portid: 1 00:45:04.772 trsvcid: 4420 00:45:04.772 subnqn: nqn.2016-06.io.spdk:testnqn 00:45:04.772 traddr: 10.0.0.1 00:45:04.772 eflags: none 00:45:04.772 sectype: none 00:45:04.772 18:52:02 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:04.772 18:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:08.053 Initializing NVMe Controllers 00:45:08.053 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:08.053 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:08.053 Initialization complete. Launching workers. 
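The `rabort` trace above builds the `-r` transport ID one field at a time, appending `trtype`, `adrfam`, `traddr`, `trsvcid`, and `subnqn` in order. A rough bash reconstruction of that loop (a sketch from the trace, not the verbatim abort_qd_sizes.sh source):

```shell
# Field values as used by the kernel-target run in the log above.
trtype=tcp
adrfam=IPv4
traddr=10.0.0.1
trsvcid=4420
subnqn=nqn.2016-06.io.spdk:testnqn

target=
for r in trtype adrfam traddr trsvcid subnqn; do
    # ${!r} is bash indirect expansion: the value of the variable named by $r.
    target="${target:+$target }$r:${!r}"
done

echo "$target"
```

The resulting string is exactly what the log passes to `build/examples/abort` via `-r`.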
00:45:08.053 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36955, failed: 0 00:45:08.053 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36955, failed to submit 0 00:45:08.053 success 0, unsuccessful 36955, failed 0 00:45:08.053 18:52:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:08.053 18:52:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:11.333 Initializing NVMe Controllers 00:45:11.333 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:11.333 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:11.333 Initialization complete. Launching workers. 00:45:11.333 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66283, failed: 0 00:45:11.333 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16726, failed to submit 49557 00:45:11.333 success 0, unsuccessful 16726, failed 0 00:45:11.333 18:52:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:11.333 18:52:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:14.613 Initializing NVMe Controllers 00:45:14.613 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:14.613 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:14.613 Initialization complete. Launching workers. 
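The kernel target exercised by these runs was wired up earlier in the trace through the nvmet configfs interface (the `mkdir`/`echo`/`ln -s` sequence in nvmf/common.sh). A sketch of that layout against a mock root so it can run unprivileged; on a real system the same paths live under `/sys/kernel/config/nvmet` and require root plus the `nvmet`/`nvmet_tcp` modules:

```shell
# Mock configfs root so this sketch runs without root or the nvmet module.
nvmet=$(mktemp -d)/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=$nvmet/ports/1

# On real configfs the attribute files appear automatically after mkdir;
# here we just create the directories the echoes below write into.
mkdir -p "$subsys/namespaces/1" "$port/subsystems"

echo 1            > "$subsys/attr_allow_any_host"       # accept any host NQN
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"  # back namespace 1 with the local NVMe disk
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"                 # listen address seen in the discovery log
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"

# Exposing the subsystem on the port is a symlink, matching the ln -s in the trace.
ln -s "$subsys" "$port/subsystems/"
```

This mirrors why the subsequent `nvme discover ... -a 10.0.0.1 -t tcp -s 4420` in the log reports `nqn.2016-06.io.spdk:testnqn` on port 1.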
00:45:14.613 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63701, failed: 0 00:45:14.613 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15914, failed to submit 47787 00:45:14.613 success 0, unsuccessful 15914, failed 0 00:45:14.613 18:52:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:45:14.613 18:52:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:45:14.613 18:52:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:45:14.613 18:52:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:14.613 18:52:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:14.613 18:52:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:45:14.613 18:52:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:14.613 18:52:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:45:14.613 18:52:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:45:14.613 18:52:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:15.547 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:15.547 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:15.547 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:15.547 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:15.547 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:15.547 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:15.547 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:45:15.547 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:15.547 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:15.547 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:15.547 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:15.806 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:15.806 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:15.806 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:15.806 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:45:15.806 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:16.739 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:45:16.739 00:45:16.739 real 0m14.936s 00:45:16.739 user 0m7.277s 00:45:16.739 sys 0m3.502s 00:45:16.739 18:52:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:16.739 18:52:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:16.739 ************************************ 00:45:16.739 END TEST kernel_target_abort 00:45:16.739 ************************************ 00:45:16.739 18:52:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:45:16.739 18:52:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:45:16.739 18:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:16.739 18:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:45:16.739 18:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:16.739 18:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:45:16.739 18:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:16.739 18:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:16.739 rmmod nvme_tcp 00:45:16.739 rmmod nvme_fabrics 00:45:16.739 rmmod nvme_keyring 00:45:16.739 18:52:15 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:45:16.739 18:52:15 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:45:16.739 18:52:15 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:45:16.739 18:52:15 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3219143 ']' 00:45:16.739 18:52:15 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3219143 00:45:16.739 18:52:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3219143 ']' 00:45:16.739 18:52:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3219143 00:45:16.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3219143) - No such process 00:45:16.739 18:52:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3219143 is not found' 00:45:16.739 Process with pid 3219143 is not found 00:45:16.739 18:52:15 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:45:16.739 18:52:15 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:18.112 Waiting for block devices as requested 00:45:18.112 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:45:18.112 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:18.112 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:18.370 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:18.370 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:18.370 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:18.370 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:18.370 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:18.628 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:18.628 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:18.628 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:18.628 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:18.885 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:18.885 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:18.885 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:18.885 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:19.141 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:19.141 18:52:17 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:19.141 18:52:17 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:19.141 18:52:17 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:45:19.141 18:52:17 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:45:19.141 18:52:17 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:19.141 18:52:17 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:45:19.141 18:52:17 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:19.141 18:52:17 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:19.141 18:52:17 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:19.141 18:52:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:19.141 18:52:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:21.663 18:52:19 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:21.663 00:45:21.663 real 0m40.503s 00:45:21.663 user 1m9.219s 00:45:21.663 sys 0m9.801s 00:45:21.663 18:52:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:21.663 18:52:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:21.663 ************************************ 00:45:21.663 END TEST nvmf_abort_qd_sizes 00:45:21.663 ************************************ 00:45:21.663 18:52:19 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:21.663 18:52:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:21.663 18:52:19 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:45:21.663 18:52:19 -- common/autotest_common.sh@10 -- # set +x 00:45:21.663 ************************************ 00:45:21.663 START TEST keyring_file 00:45:21.663 ************************************ 00:45:21.663 18:52:19 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:21.663 * Looking for test storage... 00:45:21.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:21.663 18:52:19 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:21.663 18:52:19 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:45:21.663 18:52:19 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:21.663 18:52:19 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@345 -- # : 1 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:21.663 18:52:19 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@353 -- # local d=1 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@355 -- # echo 1 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@353 -- # local d=2 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@355 -- # echo 2 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:21.663 18:52:19 keyring_file -- scripts/common.sh@368 -- # return 0 00:45:21.663 18:52:19 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:21.663 18:52:19 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:21.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:21.663 --rc genhtml_branch_coverage=1 00:45:21.663 --rc genhtml_function_coverage=1 00:45:21.663 --rc genhtml_legend=1 00:45:21.663 --rc geninfo_all_blocks=1 00:45:21.663 --rc geninfo_unexecuted_blocks=1 00:45:21.663 00:45:21.663 ' 00:45:21.663 18:52:19 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:21.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:21.663 --rc genhtml_branch_coverage=1 00:45:21.663 --rc genhtml_function_coverage=1 00:45:21.663 --rc genhtml_legend=1 00:45:21.663 --rc geninfo_all_blocks=1 00:45:21.663 --rc 
geninfo_unexecuted_blocks=1 00:45:21.663 00:45:21.663 ' 00:45:21.663 18:52:19 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:21.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:21.664 --rc genhtml_branch_coverage=1 00:45:21.664 --rc genhtml_function_coverage=1 00:45:21.664 --rc genhtml_legend=1 00:45:21.664 --rc geninfo_all_blocks=1 00:45:21.664 --rc geninfo_unexecuted_blocks=1 00:45:21.664 00:45:21.664 ' 00:45:21.664 18:52:19 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:21.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:21.664 --rc genhtml_branch_coverage=1 00:45:21.664 --rc genhtml_function_coverage=1 00:45:21.664 --rc genhtml_legend=1 00:45:21.664 --rc geninfo_all_blocks=1 00:45:21.664 --rc geninfo_unexecuted_blocks=1 00:45:21.664 00:45:21.664 ' 00:45:21.664 18:52:19 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:21.664 18:52:19 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:21.664 18:52:19 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:21.664 18:52:19 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:45:21.664 18:52:19 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:21.664 18:52:19 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:21.664 18:52:19 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:21.664 18:52:19 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:21.664 18:52:19 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:21.664 18:52:19 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:21.664 18:52:19 keyring_file -- paths/export.sh@5 -- # export PATH 00:45:21.664 18:52:19 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@51 -- # : 0 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:45:21.664 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:21.664 18:52:19 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:21.664 18:52:19 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:21.664 18:52:19 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:21.664 18:52:19 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:45:21.664 18:52:19 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:45:21.664 18:52:19 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:45:21.664 18:52:19 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:21.664 18:52:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:21.664 18:52:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:21.664 18:52:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:21.664 18:52:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:21.664 18:52:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:21.664 18:52:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0vdzO3hVBt 00:45:21.664 18:52:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:21.664 18:52:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0vdzO3hVBt 00:45:21.664 18:52:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0vdzO3hVBt 00:45:21.664 18:52:19 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.0vdzO3hVBt 00:45:21.664 18:52:19 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:45:21.664 18:52:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:21.664 18:52:19 keyring_file -- keyring/common.sh@17 -- # name=key1 00:45:21.664 18:52:19 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:21.664 18:52:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:21.664 18:52:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:21.664 18:52:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8CrpGFS1B9 00:45:21.664 18:52:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:21.664 18:52:19 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:21.664 18:52:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8CrpGFS1B9 00:45:21.664 18:52:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8CrpGFS1B9 00:45:21.664 18:52:19 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.8CrpGFS1B9 
00:45:21.664 18:52:19 keyring_file -- keyring/file.sh@30 -- # tgtpid=3225386 00:45:21.664 18:52:19 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:21.664 18:52:19 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3225386 00:45:21.664 18:52:19 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3225386 ']' 00:45:21.664 18:52:19 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:21.664 18:52:19 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:21.664 18:52:19 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:21.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:21.664 18:52:19 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:21.664 18:52:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:21.664 [2024-11-18 18:52:19.777040] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:45:21.664 [2024-11-18 18:52:19.777179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3225386 ] 00:45:21.664 [2024-11-18 18:52:19.938383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:21.923 [2024-11-18 18:52:20.084271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:45:22.857 18:52:21 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:22.857 [2024-11-18 18:52:21.045216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:22.857 null0 00:45:22.857 [2024-11-18 18:52:21.077345] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:22.857 [2024-11-18 18:52:21.078023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.857 18:52:21 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:22.857 [2024-11-18 18:52:21.105373] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:45:22.857 request: 00:45:22.857 { 00:45:22.857 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:45:22.857 "secure_channel": false, 00:45:22.857 "listen_address": { 00:45:22.857 "trtype": "tcp", 00:45:22.857 "traddr": "127.0.0.1", 00:45:22.857 "trsvcid": "4420" 00:45:22.857 }, 00:45:22.857 "method": "nvmf_subsystem_add_listener", 00:45:22.857 "req_id": 1 00:45:22.857 } 00:45:22.857 Got JSON-RPC error response 00:45:22.857 response: 00:45:22.857 { 00:45:22.857 "code": -32602, 00:45:22.857 "message": "Invalid parameters" 00:45:22.857 } 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:22.857 18:52:21 keyring_file -- keyring/file.sh@47 -- # bperfpid=3225531 00:45:22.857 18:52:21 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:45:22.857 18:52:21 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3225531 /var/tmp/bperf.sock 00:45:22.857 18:52:21 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3225531 ']' 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:22.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:22.857 18:52:21 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:22.858 18:52:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:22.858 [2024-11-18 18:52:21.192386] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:45:22.858 [2024-11-18 18:52:21.192541] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3225531 ] 00:45:23.116 [2024-11-18 18:52:21.334425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:23.374 [2024-11-18 18:52:21.470290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:23.940 18:52:22 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:23.940 18:52:22 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:45:23.940 18:52:22 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0vdzO3hVBt 00:45:23.940 18:52:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0vdzO3hVBt 00:45:24.199 18:52:22 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.8CrpGFS1B9 00:45:24.199 18:52:22 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.8CrpGFS1B9 00:45:24.457 18:52:22 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:45:24.457 18:52:22 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:45:24.457 18:52:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:24.457 18:52:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:24.457 18:52:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:24.716 18:52:22 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.0vdzO3hVBt == \/\t\m\p\/\t\m\p\.\0\v\d\z\O\3\h\V\B\t ]] 00:45:24.716 18:52:22 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:45:24.716 18:52:22 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:45:24.716 18:52:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:24.716 18:52:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:24.716 18:52:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:24.974 18:52:23 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.8CrpGFS1B9 == \/\t\m\p\/\t\m\p\.\8\C\r\p\G\F\S\1\B\9 ]] 00:45:24.974 18:52:23 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:45:24.974 18:52:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:24.974 18:52:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:24.974 18:52:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:24.974 18:52:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:24.974 18:52:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:45:25.232 18:52:23 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:45:25.232 18:52:23 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:45:25.232 18:52:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:25.232 18:52:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:25.232 18:52:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:25.232 18:52:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:25.232 18:52:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:25.536 18:52:23 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:45:25.536 18:52:23 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:25.536 18:52:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:25.835 [2024-11-18 18:52:24.052291] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:25.835 nvme0n1 00:45:25.835 18:52:24 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:45:25.835 18:52:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:25.835 18:52:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:25.835 18:52:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:25.835 18:52:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:25.835 18:52:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:45:26.401 18:52:24 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:45:26.401 18:52:24 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:45:26.401 18:52:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:26.401 18:52:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:26.401 18:52:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:26.401 18:52:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:26.401 18:52:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:26.401 18:52:24 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:45:26.401 18:52:24 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:26.659 Running I/O for 1 seconds... 00:45:27.593 6159.00 IOPS, 24.06 MiB/s 00:45:27.593 Latency(us) 00:45:27.593 [2024-11-18T17:52:25.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:27.593 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:45:27.593 nvme0n1 : 1.05 5966.39 23.31 0.00 0.00 20633.23 11602.30 58254.22 00:45:27.593 [2024-11-18T17:52:25.930Z] =================================================================================================================== 00:45:27.593 [2024-11-18T17:52:25.930Z] Total : 5966.39 23.31 0.00 0.00 20633.23 11602.30 58254.22 00:45:27.593 { 00:45:27.593 "results": [ 00:45:27.593 { 00:45:27.593 "job": "nvme0n1", 00:45:27.593 "core_mask": "0x2", 00:45:27.593 "workload": "randrw", 00:45:27.593 "percentage": 50, 00:45:27.593 "status": "finished", 00:45:27.593 "queue_depth": 128, 00:45:27.593 "io_size": 4096, 00:45:27.594 "runtime": 1.053903, 00:45:27.594 "iops": 5966.393491621146, 00:45:27.594 "mibps": 23.3062245766451, 00:45:27.594 
"io_failed": 0, 00:45:27.594 "io_timeout": 0, 00:45:27.594 "avg_latency_us": 20633.232321176136, 00:45:27.594 "min_latency_us": 11602.29925925926, 00:45:27.594 "max_latency_us": 58254.22222222222 00:45:27.594 } 00:45:27.594 ], 00:45:27.594 "core_count": 1 00:45:27.594 } 00:45:27.594 18:52:25 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:27.594 18:52:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:27.852 18:52:26 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:45:27.852 18:52:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:27.852 18:52:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:27.852 18:52:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:27.852 18:52:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:27.852 18:52:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:28.419 18:52:26 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:45:28.419 18:52:26 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:45:28.419 18:52:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:28.419 18:52:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:28.419 18:52:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:28.419 18:52:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:28.419 18:52:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:28.419 18:52:26 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:45:28.419 18:52:26 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:28.419 18:52:26 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:28.419 18:52:26 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:28.419 18:52:26 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:28.419 18:52:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:28.419 18:52:26 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:28.419 18:52:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:28.419 18:52:26 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:28.419 18:52:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:28.677 [2024-11-18 18:52:26.993187] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:28.677 [2024-11-18 18:52:26.993414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (107): Transport endpoint is not connected 00:45:28.677 [2024-11-18 18:52:26.994391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:45:28.677 [2024-11-18 18:52:26.995388] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:45:28.677 [2024-11-18 18:52:26.995423] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:28.677 [2024-11-18 18:52:26.995448] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:28.677 [2024-11-18 18:52:26.995475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:45:28.677 request: 00:45:28.677 { 00:45:28.677 "name": "nvme0", 00:45:28.677 "trtype": "tcp", 00:45:28.677 "traddr": "127.0.0.1", 00:45:28.677 "adrfam": "ipv4", 00:45:28.677 "trsvcid": "4420", 00:45:28.677 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:28.677 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:28.677 "prchk_reftag": false, 00:45:28.677 "prchk_guard": false, 00:45:28.677 "hdgst": false, 00:45:28.678 "ddgst": false, 00:45:28.678 "psk": "key1", 00:45:28.678 "allow_unrecognized_csi": false, 00:45:28.678 "method": "bdev_nvme_attach_controller", 00:45:28.678 "req_id": 1 00:45:28.678 } 00:45:28.678 Got JSON-RPC error response 00:45:28.678 response: 00:45:28.678 { 00:45:28.678 "code": -5, 00:45:28.678 "message": "Input/output error" 00:45:28.678 } 00:45:28.678 18:52:27 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:28.678 18:52:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:28.678 18:52:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:28.678 18:52:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:28.678 18:52:27 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:45:28.936 18:52:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:28.936 18:52:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:28.936 18:52:27 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:45:28.936 18:52:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:28.936 18:52:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:29.194 18:52:27 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:45:29.194 18:52:27 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:45:29.194 18:52:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:29.194 18:52:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:29.194 18:52:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:29.194 18:52:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:29.194 18:52:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:29.452 18:52:27 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:45:29.452 18:52:27 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:45:29.452 18:52:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:29.709 18:52:27 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:45:29.710 18:52:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:45:29.967 18:52:28 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:45:29.967 18:52:28 keyring_file -- keyring/file.sh@78 -- # jq length 00:45:29.967 18:52:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:30.225 18:52:28 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:45:30.225 18:52:28 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.0vdzO3hVBt 00:45:30.225 18:52:28 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.0vdzO3hVBt 00:45:30.225 18:52:28 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:30.225 18:52:28 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.0vdzO3hVBt 00:45:30.225 18:52:28 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:30.225 18:52:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:30.225 18:52:28 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:30.225 18:52:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:30.226 18:52:28 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0vdzO3hVBt 00:45:30.226 18:52:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0vdzO3hVBt 00:45:30.484 [2024-11-18 18:52:28.708700] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.0vdzO3hVBt': 0100660 00:45:30.484 [2024-11-18 18:52:28.708750] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:45:30.484 request: 00:45:30.484 { 00:45:30.484 "name": "key0", 00:45:30.484 "path": "/tmp/tmp.0vdzO3hVBt", 00:45:30.484 "method": "keyring_file_add_key", 00:45:30.484 "req_id": 1 00:45:30.484 } 00:45:30.484 Got JSON-RPC error response 00:45:30.484 response: 00:45:30.484 { 00:45:30.484 "code": -1, 00:45:30.484 "message": "Operation not permitted" 00:45:30.484 } 00:45:30.484 18:52:28 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:30.484 18:52:28 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:30.484 18:52:28 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:30.484 18:52:28 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:30.484 18:52:28 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.0vdzO3hVBt 00:45:30.484 18:52:28 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0vdzO3hVBt 00:45:30.484 18:52:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0vdzO3hVBt 00:45:30.742 18:52:29 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.0vdzO3hVBt 00:45:30.742 18:52:29 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:45:30.742 18:52:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:30.742 18:52:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:30.742 18:52:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:30.742 18:52:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:30.742 18:52:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:31.001 18:52:29 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:45:31.001 18:52:29 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:31.001 18:52:29 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:31.001 18:52:29 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:31.001 18:52:29 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:31.001 18:52:29 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:31.001 18:52:29 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:31.001 18:52:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:31.001 18:52:29 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:31.001 18:52:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:31.260 [2024-11-18 18:52:29.539043] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.0vdzO3hVBt': No such file or directory 00:45:31.260 [2024-11-18 18:52:29.539096] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:45:31.260 [2024-11-18 18:52:29.539135] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:45:31.260 [2024-11-18 18:52:29.539160] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:45:31.260 [2024-11-18 18:52:29.539184] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:45:31.260 [2024-11-18 18:52:29.539207] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:45:31.260 request: 00:45:31.260 { 00:45:31.260 "name": "nvme0", 00:45:31.260 "trtype": "tcp", 00:45:31.260 "traddr": "127.0.0.1", 00:45:31.260 "adrfam": "ipv4", 00:45:31.260 "trsvcid": "4420", 00:45:31.260 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:31.260 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:45:31.260 "prchk_reftag": false, 00:45:31.260 "prchk_guard": false, 00:45:31.260 "hdgst": false, 00:45:31.260 "ddgst": false, 00:45:31.260 "psk": "key0", 00:45:31.260 "allow_unrecognized_csi": false, 00:45:31.260 "method": "bdev_nvme_attach_controller", 00:45:31.260 "req_id": 1 00:45:31.260 } 00:45:31.260 Got JSON-RPC error response 00:45:31.260 response: 00:45:31.260 { 00:45:31.260 "code": -19, 00:45:31.260 "message": "No such device" 00:45:31.260 } 00:45:31.260 18:52:29 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:31.260 18:52:29 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:31.260 18:52:29 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:31.260 18:52:29 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:31.260 18:52:29 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:45:31.260 18:52:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:31.518 18:52:29 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:31.518 18:52:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:31.518 18:52:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:31.518 18:52:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:31.518 18:52:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:31.518 18:52:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:31.518 18:52:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.li87crbjrw 00:45:31.518 18:52:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:31.518 18:52:29 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:31.518 18:52:29 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:45:31.518 18:52:29 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:31.518 18:52:29 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:45:31.518 18:52:29 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:31.518 18:52:29 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:31.776 18:52:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.li87crbjrw 00:45:31.776 18:52:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.li87crbjrw 00:45:31.776 18:52:29 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.li87crbjrw 00:45:31.776 18:52:29 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.li87crbjrw 00:45:31.776 18:52:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.li87crbjrw 00:45:32.032 18:52:30 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:32.032 18:52:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:32.289 nvme0n1 00:45:32.289 18:52:30 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:45:32.289 18:52:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:32.289 18:52:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:32.289 18:52:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:32.289 18:52:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:32.289 
18:52:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:32.547 18:52:30 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:45:32.547 18:52:30 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:45:32.547 18:52:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:32.805 18:52:31 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:45:32.805 18:52:31 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:45:32.805 18:52:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:32.805 18:52:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:32.805 18:52:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:33.063 18:52:31 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:45:33.063 18:52:31 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:45:33.063 18:52:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:33.063 18:52:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:33.063 18:52:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:33.063 18:52:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:33.063 18:52:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:33.629 18:52:31 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:45:33.629 18:52:31 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:33.629 18:52:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:45:33.629 18:52:31 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:45:33.629 18:52:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:33.629 18:52:31 keyring_file -- keyring/file.sh@105 -- # jq length 00:45:34.194 18:52:32 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:45:34.195 18:52:32 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.li87crbjrw 00:45:34.195 18:52:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.li87crbjrw 00:45:34.195 18:52:32 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.8CrpGFS1B9 00:45:34.195 18:52:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.8CrpGFS1B9 00:45:34.452 18:52:32 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:34.452 18:52:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:35.018 nvme0n1 00:45:35.019 18:52:33 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:45:35.019 18:52:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:45:35.277 18:52:33 keyring_file -- keyring/file.sh@113 -- # config='{ 00:45:35.277 "subsystems": [ 00:45:35.277 { 00:45:35.277 "subsystem": "keyring", 00:45:35.277 
"config": [ 00:45:35.277 { 00:45:35.277 "method": "keyring_file_add_key", 00:45:35.277 "params": { 00:45:35.277 "name": "key0", 00:45:35.277 "path": "/tmp/tmp.li87crbjrw" 00:45:35.277 } 00:45:35.277 }, 00:45:35.277 { 00:45:35.277 "method": "keyring_file_add_key", 00:45:35.277 "params": { 00:45:35.277 "name": "key1", 00:45:35.277 "path": "/tmp/tmp.8CrpGFS1B9" 00:45:35.277 } 00:45:35.277 } 00:45:35.277 ] 00:45:35.277 }, 00:45:35.277 { 00:45:35.277 "subsystem": "iobuf", 00:45:35.277 "config": [ 00:45:35.277 { 00:45:35.277 "method": "iobuf_set_options", 00:45:35.277 "params": { 00:45:35.277 "small_pool_count": 8192, 00:45:35.277 "large_pool_count": 1024, 00:45:35.277 "small_bufsize": 8192, 00:45:35.277 "large_bufsize": 135168, 00:45:35.277 "enable_numa": false 00:45:35.277 } 00:45:35.277 } 00:45:35.277 ] 00:45:35.277 }, 00:45:35.277 { 00:45:35.277 "subsystem": "sock", 00:45:35.277 "config": [ 00:45:35.277 { 00:45:35.277 "method": "sock_set_default_impl", 00:45:35.277 "params": { 00:45:35.277 "impl_name": "posix" 00:45:35.277 } 00:45:35.277 }, 00:45:35.277 { 00:45:35.277 "method": "sock_impl_set_options", 00:45:35.277 "params": { 00:45:35.277 "impl_name": "ssl", 00:45:35.277 "recv_buf_size": 4096, 00:45:35.277 "send_buf_size": 4096, 00:45:35.277 "enable_recv_pipe": true, 00:45:35.277 "enable_quickack": false, 00:45:35.277 "enable_placement_id": 0, 00:45:35.277 "enable_zerocopy_send_server": true, 00:45:35.277 "enable_zerocopy_send_client": false, 00:45:35.277 "zerocopy_threshold": 0, 00:45:35.277 "tls_version": 0, 00:45:35.277 "enable_ktls": false 00:45:35.277 } 00:45:35.277 }, 00:45:35.277 { 00:45:35.277 "method": "sock_impl_set_options", 00:45:35.277 "params": { 00:45:35.277 "impl_name": "posix", 00:45:35.277 "recv_buf_size": 2097152, 00:45:35.277 "send_buf_size": 2097152, 00:45:35.277 "enable_recv_pipe": true, 00:45:35.277 "enable_quickack": false, 00:45:35.277 "enable_placement_id": 0, 00:45:35.277 "enable_zerocopy_send_server": true, 00:45:35.277 
"enable_zerocopy_send_client": false, 00:45:35.277 "zerocopy_threshold": 0, 00:45:35.277 "tls_version": 0, 00:45:35.277 "enable_ktls": false 00:45:35.277 } 00:45:35.277 } 00:45:35.277 ] 00:45:35.277 }, 00:45:35.277 { 00:45:35.277 "subsystem": "vmd", 00:45:35.277 "config": [] 00:45:35.277 }, 00:45:35.277 { 00:45:35.277 "subsystem": "accel", 00:45:35.277 "config": [ 00:45:35.277 { 00:45:35.277 "method": "accel_set_options", 00:45:35.277 "params": { 00:45:35.277 "small_cache_size": 128, 00:45:35.277 "large_cache_size": 16, 00:45:35.277 "task_count": 2048, 00:45:35.277 "sequence_count": 2048, 00:45:35.277 "buf_count": 2048 00:45:35.277 } 00:45:35.277 } 00:45:35.277 ] 00:45:35.277 }, 00:45:35.277 { 00:45:35.277 "subsystem": "bdev", 00:45:35.277 "config": [ 00:45:35.277 { 00:45:35.277 "method": "bdev_set_options", 00:45:35.277 "params": { 00:45:35.277 "bdev_io_pool_size": 65535, 00:45:35.277 "bdev_io_cache_size": 256, 00:45:35.277 "bdev_auto_examine": true, 00:45:35.277 "iobuf_small_cache_size": 128, 00:45:35.277 "iobuf_large_cache_size": 16 00:45:35.277 } 00:45:35.277 }, 00:45:35.277 { 00:45:35.277 "method": "bdev_raid_set_options", 00:45:35.277 "params": { 00:45:35.277 "process_window_size_kb": 1024, 00:45:35.277 "process_max_bandwidth_mb_sec": 0 00:45:35.277 } 00:45:35.277 }, 00:45:35.277 { 00:45:35.277 "method": "bdev_iscsi_set_options", 00:45:35.277 "params": { 00:45:35.277 "timeout_sec": 30 00:45:35.277 } 00:45:35.277 }, 00:45:35.277 { 00:45:35.277 "method": "bdev_nvme_set_options", 00:45:35.277 "params": { 00:45:35.277 "action_on_timeout": "none", 00:45:35.277 "timeout_us": 0, 00:45:35.277 "timeout_admin_us": 0, 00:45:35.277 "keep_alive_timeout_ms": 10000, 00:45:35.277 "arbitration_burst": 0, 00:45:35.277 "low_priority_weight": 0, 00:45:35.277 "medium_priority_weight": 0, 00:45:35.277 "high_priority_weight": 0, 00:45:35.277 "nvme_adminq_poll_period_us": 10000, 00:45:35.277 "nvme_ioq_poll_period_us": 0, 00:45:35.277 "io_queue_requests": 512, 00:45:35.277 
"delay_cmd_submit": true, 00:45:35.277 "transport_retry_count": 4, 00:45:35.277 "bdev_retry_count": 3, 00:45:35.277 "transport_ack_timeout": 0, 00:45:35.277 "ctrlr_loss_timeout_sec": 0, 00:45:35.277 "reconnect_delay_sec": 0, 00:45:35.277 "fast_io_fail_timeout_sec": 0, 00:45:35.277 "disable_auto_failback": false, 00:45:35.277 "generate_uuids": false, 00:45:35.277 "transport_tos": 0, 00:45:35.277 "nvme_error_stat": false, 00:45:35.277 "rdma_srq_size": 0, 00:45:35.277 "io_path_stat": false, 00:45:35.277 "allow_accel_sequence": false, 00:45:35.277 "rdma_max_cq_size": 0, 00:45:35.277 "rdma_cm_event_timeout_ms": 0, 00:45:35.277 "dhchap_digests": [ 00:45:35.277 "sha256", 00:45:35.277 "sha384", 00:45:35.277 "sha512" 00:45:35.277 ], 00:45:35.277 "dhchap_dhgroups": [ 00:45:35.277 "null", 00:45:35.277 "ffdhe2048", 00:45:35.277 "ffdhe3072", 00:45:35.277 "ffdhe4096", 00:45:35.277 "ffdhe6144", 00:45:35.277 "ffdhe8192" 00:45:35.277 ] 00:45:35.277 } 00:45:35.277 }, 00:45:35.277 { 00:45:35.277 "method": "bdev_nvme_attach_controller", 00:45:35.277 "params": { 00:45:35.277 "name": "nvme0", 00:45:35.277 "trtype": "TCP", 00:45:35.278 "adrfam": "IPv4", 00:45:35.278 "traddr": "127.0.0.1", 00:45:35.278 "trsvcid": "4420", 00:45:35.278 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:35.278 "prchk_reftag": false, 00:45:35.278 "prchk_guard": false, 00:45:35.278 "ctrlr_loss_timeout_sec": 0, 00:45:35.278 "reconnect_delay_sec": 0, 00:45:35.278 "fast_io_fail_timeout_sec": 0, 00:45:35.278 "psk": "key0", 00:45:35.278 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:35.278 "hdgst": false, 00:45:35.278 "ddgst": false, 00:45:35.278 "multipath": "multipath" 00:45:35.278 } 00:45:35.278 }, 00:45:35.278 { 00:45:35.278 "method": "bdev_nvme_set_hotplug", 00:45:35.278 "params": { 00:45:35.278 "period_us": 100000, 00:45:35.278 "enable": false 00:45:35.278 } 00:45:35.278 }, 00:45:35.278 { 00:45:35.278 "method": "bdev_wait_for_examine" 00:45:35.278 } 00:45:35.278 ] 00:45:35.278 }, 00:45:35.278 { 00:45:35.278 
"subsystem": "nbd", 00:45:35.278 "config": [] 00:45:35.278 } 00:45:35.278 ] 00:45:35.278 }' 00:45:35.278 18:52:33 keyring_file -- keyring/file.sh@115 -- # killprocess 3225531 00:45:35.278 18:52:33 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3225531 ']' 00:45:35.278 18:52:33 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3225531 00:45:35.278 18:52:33 keyring_file -- common/autotest_common.sh@959 -- # uname 00:45:35.278 18:52:33 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:35.278 18:52:33 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3225531 00:45:35.278 18:52:33 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:35.278 18:52:33 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:35.278 18:52:33 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3225531' 00:45:35.278 killing process with pid 3225531 00:45:35.278 18:52:33 keyring_file -- common/autotest_common.sh@973 -- # kill 3225531 00:45:35.278 Received shutdown signal, test time was about 1.000000 seconds 00:45:35.278 00:45:35.278 Latency(us) 00:45:35.278 [2024-11-18T17:52:33.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:35.278 [2024-11-18T17:52:33.615Z] =================================================================================================================== 00:45:35.278 [2024-11-18T17:52:33.615Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:35.278 18:52:33 keyring_file -- common/autotest_common.sh@978 -- # wait 3225531 00:45:36.213 18:52:34 keyring_file -- keyring/file.sh@118 -- # bperfpid=3227214 00:45:36.213 18:52:34 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3227214 /var/tmp/bperf.sock 00:45:36.213 18:52:34 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3227214 ']' 00:45:36.213 18:52:34 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:45:36.213 18:52:34 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:45:36.213 18:52:34 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:36.213 18:52:34 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:36.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:36.213 18:52:34 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:36.213 18:52:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:36.213 18:52:34 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:45:36.213 "subsystems": [ 00:45:36.213 { 00:45:36.213 "subsystem": "keyring", 00:45:36.213 "config": [ 00:45:36.213 { 00:45:36.213 "method": "keyring_file_add_key", 00:45:36.213 "params": { 00:45:36.213 "name": "key0", 00:45:36.213 "path": "/tmp/tmp.li87crbjrw" 00:45:36.213 } 00:45:36.213 }, 00:45:36.213 { 00:45:36.213 "method": "keyring_file_add_key", 00:45:36.213 "params": { 00:45:36.213 "name": "key1", 00:45:36.213 "path": "/tmp/tmp.8CrpGFS1B9" 00:45:36.213 } 00:45:36.213 } 00:45:36.213 ] 00:45:36.213 }, 00:45:36.213 { 00:45:36.213 "subsystem": "iobuf", 00:45:36.213 "config": [ 00:45:36.213 { 00:45:36.213 "method": "iobuf_set_options", 00:45:36.213 "params": { 00:45:36.213 "small_pool_count": 8192, 00:45:36.213 "large_pool_count": 1024, 00:45:36.213 "small_bufsize": 8192, 00:45:36.213 "large_bufsize": 135168, 00:45:36.213 "enable_numa": false 00:45:36.213 } 00:45:36.213 } 00:45:36.213 ] 00:45:36.213 }, 00:45:36.213 { 00:45:36.213 "subsystem": "sock", 00:45:36.213 "config": [ 00:45:36.213 { 00:45:36.213 "method": "sock_set_default_impl", 00:45:36.213 "params": { 00:45:36.213 "impl_name": "posix" 00:45:36.213 } 00:45:36.213 }, 
00:45:36.213 { 00:45:36.213 "method": "sock_impl_set_options", 00:45:36.213 "params": { 00:45:36.213 "impl_name": "ssl", 00:45:36.213 "recv_buf_size": 4096, 00:45:36.213 "send_buf_size": 4096, 00:45:36.213 "enable_recv_pipe": true, 00:45:36.213 "enable_quickack": false, 00:45:36.213 "enable_placement_id": 0, 00:45:36.213 "enable_zerocopy_send_server": true, 00:45:36.213 "enable_zerocopy_send_client": false, 00:45:36.213 "zerocopy_threshold": 0, 00:45:36.213 "tls_version": 0, 00:45:36.213 "enable_ktls": false 00:45:36.213 } 00:45:36.213 }, 00:45:36.213 { 00:45:36.213 "method": "sock_impl_set_options", 00:45:36.213 "params": { 00:45:36.213 "impl_name": "posix", 00:45:36.213 "recv_buf_size": 2097152, 00:45:36.213 "send_buf_size": 2097152, 00:45:36.213 "enable_recv_pipe": true, 00:45:36.213 "enable_quickack": false, 00:45:36.213 "enable_placement_id": 0, 00:45:36.213 "enable_zerocopy_send_server": true, 00:45:36.213 "enable_zerocopy_send_client": false, 00:45:36.213 "zerocopy_threshold": 0, 00:45:36.213 "tls_version": 0, 00:45:36.213 "enable_ktls": false 00:45:36.213 } 00:45:36.213 } 00:45:36.213 ] 00:45:36.213 }, 00:45:36.213 { 00:45:36.213 "subsystem": "vmd", 00:45:36.213 "config": [] 00:45:36.213 }, 00:45:36.213 { 00:45:36.213 "subsystem": "accel", 00:45:36.213 "config": [ 00:45:36.213 { 00:45:36.213 "method": "accel_set_options", 00:45:36.213 "params": { 00:45:36.213 "small_cache_size": 128, 00:45:36.213 "large_cache_size": 16, 00:45:36.213 "task_count": 2048, 00:45:36.213 "sequence_count": 2048, 00:45:36.213 "buf_count": 2048 00:45:36.213 } 00:45:36.213 } 00:45:36.213 ] 00:45:36.213 }, 00:45:36.213 { 00:45:36.213 "subsystem": "bdev", 00:45:36.213 "config": [ 00:45:36.213 { 00:45:36.213 "method": "bdev_set_options", 00:45:36.213 "params": { 00:45:36.213 "bdev_io_pool_size": 65535, 00:45:36.213 "bdev_io_cache_size": 256, 00:45:36.213 "bdev_auto_examine": true, 00:45:36.213 "iobuf_small_cache_size": 128, 00:45:36.213 "iobuf_large_cache_size": 16 00:45:36.213 } 
00:45:36.213 }, 00:45:36.213 { 00:45:36.213 "method": "bdev_raid_set_options", 00:45:36.213 "params": { 00:45:36.213 "process_window_size_kb": 1024, 00:45:36.213 "process_max_bandwidth_mb_sec": 0 00:45:36.213 } 00:45:36.213 }, 00:45:36.213 { 00:45:36.213 "method": "bdev_iscsi_set_options", 00:45:36.213 "params": { 00:45:36.213 "timeout_sec": 30 00:45:36.213 } 00:45:36.213 }, 00:45:36.213 { 00:45:36.213 "method": "bdev_nvme_set_options", 00:45:36.213 "params": { 00:45:36.213 "action_on_timeout": "none", 00:45:36.213 "timeout_us": 0, 00:45:36.213 "timeout_admin_us": 0, 00:45:36.213 "keep_alive_timeout_ms": 10000, 00:45:36.213 "arbitration_burst": 0, 00:45:36.213 "low_priority_weight": 0, 00:45:36.213 "medium_priority_weight": 0, 00:45:36.214 "high_priority_weight": 0, 00:45:36.214 "nvme_adminq_poll_period_us": 10000, 00:45:36.214 "nvme_ioq_poll_period_us": 0, 00:45:36.214 "io_queue_requests": 512, 00:45:36.214 "delay_cmd_submit": true, 00:45:36.214 "transport_retry_count": 4, 00:45:36.214 "bdev_retry_count": 3, 00:45:36.214 "transport_ack_timeout": 0, 00:45:36.214 "ctrlr_loss_timeout_sec": 0, 00:45:36.214 "reconnect_delay_sec": 0, 00:45:36.214 "fast_io_fail_timeout_sec": 0, 00:45:36.214 "disable_auto_failback": false, 00:45:36.214 "generate_uuids": false, 00:45:36.214 "transport_tos": 0, 00:45:36.214 "nvme_error_stat": false, 00:45:36.214 "rdma_srq_size": 0, 00:45:36.214 "io_path_stat": false, 00:45:36.214 "allow_accel_sequence": false, 00:45:36.214 "rdma_max_cq_size": 0, 00:45:36.214 "rdma_cm_event_timeout_ms": 0, 00:45:36.214 "dhchap_digests": [ 00:45:36.214 "sha256", 00:45:36.214 "sha384", 00:45:36.214 "sha512" 00:45:36.214 ], 00:45:36.214 "dhchap_dhgroups": [ 00:45:36.214 "null", 00:45:36.214 "ffdhe2048", 00:45:36.214 "ffdhe3072", 00:45:36.214 "ffdhe4096", 00:45:36.214 "ffdhe6144", 00:45:36.214 "ffdhe8192" 00:45:36.214 ] 00:45:36.214 } 00:45:36.214 }, 00:45:36.214 { 00:45:36.214 "method": "bdev_nvme_attach_controller", 00:45:36.214 "params": { 00:45:36.214 
"name": "nvme0", 00:45:36.214 "trtype": "TCP", 00:45:36.214 "adrfam": "IPv4", 00:45:36.214 "traddr": "127.0.0.1", 00:45:36.214 "trsvcid": "4420", 00:45:36.214 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:36.214 "prchk_reftag": false, 00:45:36.214 "prchk_guard": false, 00:45:36.214 "ctrlr_loss_timeout_sec": 0, 00:45:36.214 "reconnect_delay_sec": 0, 00:45:36.214 "fast_io_fail_timeout_sec": 0, 00:45:36.214 "psk": "key0", 00:45:36.214 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:36.214 "hdgst": false, 00:45:36.214 "ddgst": false, 00:45:36.214 "multipath": "multipath" 00:45:36.214 } 00:45:36.214 }, 00:45:36.214 { 00:45:36.214 "method": "bdev_nvme_set_hotplug", 00:45:36.214 "params": { 00:45:36.214 "period_us": 100000, 00:45:36.214 "enable": false 00:45:36.214 } 00:45:36.214 }, 00:45:36.214 { 00:45:36.214 "method": "bdev_wait_for_examine" 00:45:36.214 } 00:45:36.214 ] 00:45:36.214 }, 00:45:36.214 { 00:45:36.214 "subsystem": "nbd", 00:45:36.214 "config": [] 00:45:36.214 } 00:45:36.214 ] 00:45:36.214 }' 00:45:36.214 [2024-11-18 18:52:34.454196] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:45:36.214 [2024-11-18 18:52:34.454330] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3227214 ] 00:45:36.472 [2024-11-18 18:52:34.597831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:36.472 [2024-11-18 18:52:34.735118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:37.039 [2024-11-18 18:52:35.181237] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:37.296 18:52:35 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:37.296 18:52:35 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:45:37.297 18:52:35 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:45:37.297 18:52:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:37.297 18:52:35 keyring_file -- keyring/file.sh@121 -- # jq length 00:45:37.554 18:52:35 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:45:37.554 18:52:35 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:45:37.554 18:52:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:37.554 18:52:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:37.554 18:52:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:37.554 18:52:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:37.554 18:52:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:37.812 18:52:35 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:45:37.812 18:52:35 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:45:37.812 18:52:35 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:37.812 18:52:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:37.812 18:52:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:37.812 18:52:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:37.812 18:52:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:38.069 18:52:36 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:45:38.069 18:52:36 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:45:38.069 18:52:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:45:38.069 18:52:36 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:45:38.327 18:52:36 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:45:38.327 18:52:36 keyring_file -- keyring/file.sh@1 -- # cleanup 00:45:38.327 18:52:36 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.li87crbjrw /tmp/tmp.8CrpGFS1B9 00:45:38.327 18:52:36 keyring_file -- keyring/file.sh@20 -- # killprocess 3227214 00:45:38.327 18:52:36 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3227214 ']' 00:45:38.327 18:52:36 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3227214 00:45:38.327 18:52:36 keyring_file -- common/autotest_common.sh@959 -- # uname 00:45:38.327 18:52:36 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:38.327 18:52:36 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3227214 00:45:38.327 18:52:36 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:38.327 18:52:36 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:38.327 18:52:36 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3227214' 00:45:38.327 killing process with pid 3227214 00:45:38.327 18:52:36 keyring_file -- common/autotest_common.sh@973 -- # kill 3227214 00:45:38.327 Received shutdown signal, test time was about 1.000000 seconds 00:45:38.327 00:45:38.328 Latency(us) 00:45:38.328 [2024-11-18T17:52:36.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:38.328 [2024-11-18T17:52:36.665Z] =================================================================================================================== 00:45:38.328 [2024-11-18T17:52:36.665Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:45:38.328 18:52:36 keyring_file -- common/autotest_common.sh@978 -- # wait 3227214 00:45:39.262 18:52:37 keyring_file -- keyring/file.sh@21 -- # killprocess 3225386 00:45:39.262 18:52:37 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3225386 ']' 00:45:39.262 18:52:37 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3225386 00:45:39.262 18:52:37 keyring_file -- common/autotest_common.sh@959 -- # uname 00:45:39.262 18:52:37 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:39.262 18:52:37 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3225386 00:45:39.262 18:52:37 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:39.262 18:52:37 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:39.262 18:52:37 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3225386' 00:45:39.262 killing process with pid 3225386 00:45:39.262 18:52:37 keyring_file -- common/autotest_common.sh@973 -- # kill 3225386 00:45:39.262 18:52:37 keyring_file -- common/autotest_common.sh@978 -- # wait 3225386 00:45:41.791 00:45:41.791 real 0m20.266s 00:45:41.791 user 0m46.203s 00:45:41.791 sys 0m3.673s 00:45:41.791 18:52:39 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:45:41.791 18:52:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:41.791 ************************************ 00:45:41.791 END TEST keyring_file 00:45:41.791 ************************************ 00:45:41.791 18:52:39 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:45:41.791 18:52:39 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:41.791 18:52:39 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:45:41.791 18:52:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:41.791 18:52:39 -- common/autotest_common.sh@10 -- # set +x 00:45:41.791 ************************************ 00:45:41.791 START TEST keyring_linux 00:45:41.791 ************************************ 00:45:41.791 18:52:39 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:41.791 Joined session keyring: 558875720 00:45:41.791 * Looking for test storage... 
00:45:41.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:41.791 18:52:39 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:41.791 18:52:39 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:45:41.791 18:52:39 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:41.791 18:52:39 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:41.791 18:52:39 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:41.791 18:52:39 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:41.791 18:52:39 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:41.791 18:52:39 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:45:41.791 18:52:39 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:45:41.791 18:52:39 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:45:41.791 18:52:39 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:45:41.791 18:52:39 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@345 -- # : 1 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@368 -- # return 0 00:45:41.792 18:52:39 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:41.792 18:52:39 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:41.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:41.792 --rc genhtml_branch_coverage=1 00:45:41.792 --rc genhtml_function_coverage=1 00:45:41.792 --rc genhtml_legend=1 00:45:41.792 --rc geninfo_all_blocks=1 00:45:41.792 --rc geninfo_unexecuted_blocks=1 00:45:41.792 00:45:41.792 ' 00:45:41.792 18:52:39 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:41.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:41.792 --rc genhtml_branch_coverage=1 00:45:41.792 --rc genhtml_function_coverage=1 00:45:41.792 --rc genhtml_legend=1 00:45:41.792 --rc geninfo_all_blocks=1 00:45:41.792 --rc geninfo_unexecuted_blocks=1 00:45:41.792 00:45:41.792 ' 
00:45:41.792 18:52:39 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:41.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:41.792 --rc genhtml_branch_coverage=1 00:45:41.792 --rc genhtml_function_coverage=1 00:45:41.792 --rc genhtml_legend=1 00:45:41.792 --rc geninfo_all_blocks=1 00:45:41.792 --rc geninfo_unexecuted_blocks=1 00:45:41.792 00:45:41.792 ' 00:45:41.792 18:52:39 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:41.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:41.792 --rc genhtml_branch_coverage=1 00:45:41.792 --rc genhtml_function_coverage=1 00:45:41.792 --rc genhtml_legend=1 00:45:41.792 --rc geninfo_all_blocks=1 00:45:41.792 --rc geninfo_unexecuted_blocks=1 00:45:41.792 00:45:41.792 ' 00:45:41.792 18:52:39 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:41.792 18:52:39 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:41.792 18:52:39 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:41.792 18:52:39 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:41.792 18:52:39 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:41.792 18:52:39 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:41.792 18:52:39 keyring_linux -- paths/export.sh@5 -- # export PATH 00:45:41.792 18:52:39 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:45:41.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:41.792 18:52:39 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:41.792 18:52:39 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:41.792 18:52:39 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:41.792 18:52:39 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:45:41.792 18:52:39 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:45:41.792 18:52:39 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:45:41.792 18:52:39 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:45:41.792 18:52:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:41.792 18:52:39 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:45:41.792 18:52:39 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:41.792 18:52:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:41.792 18:52:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:45:41.792 18:52:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@733 -- # python - 00:45:41.792 18:52:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:45:41.792 18:52:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:45:41.792 /tmp/:spdk-test:key0 00:45:41.792 18:52:39 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:45:41.792 18:52:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:41.792 18:52:39 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:45:41.792 18:52:39 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:41.792 18:52:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:41.792 18:52:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:45:41.792 18:52:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:45:41.792 18:52:39 keyring_linux -- nvmf/common.sh@733 -- # python - 00:45:41.792 18:52:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:45:41.792 18:52:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:45:41.793 /tmp/:spdk-test:key1 00:45:41.793 18:52:39 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3227892 00:45:41.793 18:52:39 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:41.793 18:52:39 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3227892 00:45:41.793 18:52:39 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3227892 ']' 00:45:41.793 18:52:39 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:41.793 18:52:39 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:41.793 18:52:39 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:41.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:41.793 18:52:39 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:41.793 18:52:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:41.793 [2024-11-18 18:52:40.081404] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:45:41.793 [2024-11-18 18:52:40.081572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3227892 ] 00:45:42.051 [2024-11-18 18:52:40.225120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:42.051 [2024-11-18 18:52:40.358077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:42.985 18:52:41 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:42.985 18:52:41 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:45:42.985 18:52:41 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:45:42.985 18:52:41 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.985 18:52:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:43.242 [2024-11-18 18:52:41.323453] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:43.242 null0 00:45:43.242 [2024-11-18 18:52:41.355460] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:43.242 [2024-11-18 18:52:41.356141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:43.242 18:52:41 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:43.242 18:52:41 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:45:43.242 589264816 00:45:43.242 18:52:41 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:45:43.242 620782688 00:45:43.242 18:52:41 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3228164 00:45:43.242 18:52:41 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3228164 /var/tmp/bperf.sock 00:45:43.242 18:52:41 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:45:43.242 18:52:41 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3228164 ']' 00:45:43.242 18:52:41 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:43.242 18:52:41 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:43.242 18:52:41 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:43.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:43.242 18:52:41 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:43.242 18:52:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:43.242 [2024-11-18 18:52:41.462939] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:45:43.242 [2024-11-18 18:52:41.463087] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3228164 ] 00:45:43.500 [2024-11-18 18:52:41.598535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:43.500 [2024-11-18 18:52:41.721575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:44.433 18:52:42 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:44.433 18:52:42 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:45:44.433 18:52:42 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:45:44.433 18:52:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:45:44.433 18:52:42 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:45:44.433 18:52:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:45:44.999 18:52:43 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:44.999 18:52:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:45.257 [2024-11-18 18:52:43.574940] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:45.514 nvme0n1 00:45:45.514 18:52:43 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:45:45.514 18:52:43 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:45:45.514 18:52:43 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:45.514 18:52:43 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:45.514 18:52:43 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:45.514 18:52:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:45.772 18:52:43 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:45:45.772 18:52:43 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:45.772 18:52:43 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:45:45.772 18:52:43 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:45:45.772 18:52:43 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:45.772 18:52:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:45.772 18:52:43 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:45:46.030 18:52:44 keyring_linux -- keyring/linux.sh@25 -- # sn=589264816 00:45:46.030 18:52:44 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:45:46.030 18:52:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:46.030 18:52:44 keyring_linux -- keyring/linux.sh@26 -- # [[ 589264816 == \5\8\9\2\6\4\8\1\6 ]] 00:45:46.030 18:52:44 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 589264816 00:45:46.030 18:52:44 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:45:46.030 18:52:44 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:46.030 Running I/O for 1 seconds... 00:45:47.403 6873.00 IOPS, 26.85 MiB/s 00:45:47.403 Latency(us) 00:45:47.403 [2024-11-18T17:52:45.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:47.403 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:45:47.403 nvme0n1 : 1.02 6859.98 26.80 0.00 0.00 18459.42 6359.42 25243.50 00:45:47.403 [2024-11-18T17:52:45.740Z] =================================================================================================================== 00:45:47.403 [2024-11-18T17:52:45.740Z] Total : 6859.98 26.80 0.00 0.00 18459.42 6359.42 25243.50 00:45:47.403 { 00:45:47.403 "results": [ 00:45:47.403 { 00:45:47.403 "job": "nvme0n1", 00:45:47.403 "core_mask": "0x2", 00:45:47.403 "workload": "randread", 00:45:47.403 "status": "finished", 00:45:47.403 "queue_depth": 128, 00:45:47.403 "io_size": 4096, 00:45:47.403 "runtime": 1.020702, 00:45:47.403 "iops": 6859.98459883492, 00:45:47.403 "mibps": 26.796814839198905, 00:45:47.403 "io_failed": 0, 00:45:47.403 "io_timeout": 0, 00:45:47.403 "avg_latency_us": 18459.421061918816, 00:45:47.403 "min_latency_us": 6359.419259259259, 00:45:47.403 "max_latency_us": 25243.496296296296 00:45:47.403 } 00:45:47.403 ], 00:45:47.403 "core_count": 1 00:45:47.403 } 00:45:47.403 18:52:45 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:47.403 18:52:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:47.403 18:52:45 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:45:47.403 18:52:45 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:45:47.403 18:52:45 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:47.403 18:52:45 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:47.403 18:52:45 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:47.403 18:52:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:47.660 18:52:45 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:45:47.660 18:52:45 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:47.660 18:52:45 keyring_linux -- keyring/linux.sh@23 -- # return 00:45:47.660 18:52:45 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:47.661 18:52:45 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:45:47.661 18:52:45 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:47.661 18:52:45 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:47.661 18:52:45 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:47.661 18:52:45 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:47.661 18:52:45 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:47.661 18:52:45 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:47.661 18:52:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:47.918 [2024-11-18 18:52:46.207589] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:47.918 [2024-11-18 18:52:46.208403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (107): Transport endpoint is not connected 00:45:47.918 [2024-11-18 18:52:46.209373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:45:47.918 [2024-11-18 18:52:46.210370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:45:47.918 [2024-11-18 18:52:46.210405] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:47.918 [2024-11-18 18:52:46.210430] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:47.919 [2024-11-18 18:52:46.210466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:45:47.919 request: 00:45:47.919 { 00:45:47.919 "name": "nvme0", 00:45:47.919 "trtype": "tcp", 00:45:47.919 "traddr": "127.0.0.1", 00:45:47.919 "adrfam": "ipv4", 00:45:47.919 "trsvcid": "4420", 00:45:47.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:47.919 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:47.919 "prchk_reftag": false, 00:45:47.919 "prchk_guard": false, 00:45:47.919 "hdgst": false, 00:45:47.919 "ddgst": false, 00:45:47.919 "psk": ":spdk-test:key1", 00:45:47.919 "allow_unrecognized_csi": false, 00:45:47.919 "method": "bdev_nvme_attach_controller", 00:45:47.919 "req_id": 1 00:45:47.919 } 00:45:47.919 Got JSON-RPC error response 00:45:47.919 response: 00:45:47.919 { 00:45:47.919 "code": -5, 00:45:47.919 "message": "Input/output error" 00:45:47.919 } 00:45:47.919 18:52:46 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:45:47.919 18:52:46 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:47.919 18:52:46 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:47.919 18:52:46 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:47.919 18:52:46 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:45:47.919 18:52:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:47.919 18:52:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:45:47.919 18:52:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:45:47.919 18:52:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:45:47.919 18:52:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:47.919 18:52:46 keyring_linux -- keyring/linux.sh@33 -- # sn=589264816 00:45:47.919 18:52:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 589264816 00:45:47.919 1 links removed 00:45:47.919 18:52:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:47.919 18:52:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:45:47.919 
18:52:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:45:47.919 18:52:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:45:47.919 18:52:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:45:47.919 18:52:46 keyring_linux -- keyring/linux.sh@33 -- # sn=620782688 00:45:47.919 18:52:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 620782688 00:45:47.919 1 links removed 00:45:47.919 18:52:46 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3228164 00:45:47.919 18:52:46 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3228164 ']' 00:45:47.919 18:52:46 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3228164 00:45:47.919 18:52:46 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:45:47.919 18:52:46 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:47.919 18:52:46 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3228164 00:45:48.177 18:52:46 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:48.177 18:52:46 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:48.177 18:52:46 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3228164' 00:45:48.177 killing process with pid 3228164 00:45:48.177 18:52:46 keyring_linux -- common/autotest_common.sh@973 -- # kill 3228164 00:45:48.177 Received shutdown signal, test time was about 1.000000 seconds 00:45:48.177 00:45:48.177 Latency(us) 00:45:48.177 [2024-11-18T17:52:46.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:48.177 [2024-11-18T17:52:46.514Z] =================================================================================================================== 00:45:48.177 [2024-11-18T17:52:46.514Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:48.177 18:52:46 keyring_linux -- common/autotest_common.sh@978 -- # wait 3228164 
00:45:49.110 18:52:47 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3227892 00:45:49.110 18:52:47 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3227892 ']' 00:45:49.110 18:52:47 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3227892 00:45:49.110 18:52:47 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:45:49.110 18:52:47 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:49.110 18:52:47 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3227892 00:45:49.110 18:52:47 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:49.110 18:52:47 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:49.110 18:52:47 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3227892' 00:45:49.110 killing process with pid 3227892 00:45:49.110 18:52:47 keyring_linux -- common/autotest_common.sh@973 -- # kill 3227892 00:45:49.110 18:52:47 keyring_linux -- common/autotest_common.sh@978 -- # wait 3227892 00:45:51.654 00:45:51.654 real 0m9.703s 00:45:51.654 user 0m16.784s 00:45:51.654 sys 0m1.927s 00:45:51.654 18:52:49 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:51.654 18:52:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:51.654 ************************************ 00:45:51.654 END TEST keyring_linux 00:45:51.654 ************************************ 00:45:51.654 18:52:49 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:45:51.654 18:52:49 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:45:51.654 18:52:49 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:45:51.654 18:52:49 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:45:51.654 18:52:49 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:45:51.654 18:52:49 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:45:51.654 18:52:49 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:45:51.654 18:52:49 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:45:51.654 18:52:49 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:45:51.654 18:52:49 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:45:51.654 18:52:49 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:45:51.654 18:52:49 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:45:51.654 18:52:49 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:45:51.654 18:52:49 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:45:51.654 18:52:49 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:45:51.654 18:52:49 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:45:51.654 18:52:49 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:45:51.654 18:52:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:51.654 18:52:49 -- common/autotest_common.sh@10 -- # set +x 00:45:51.654 18:52:49 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:45:51.654 18:52:49 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:45:51.654 18:52:49 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:45:51.654 18:52:49 -- common/autotest_common.sh@10 -- # set +x 00:45:53.554 INFO: APP EXITING 00:45:53.554 INFO: killing all VMs 00:45:53.554 INFO: killing vhost app 00:45:53.554 INFO: EXIT DONE 00:45:54.120 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:45:54.120 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:45:54.120 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:45:54.120 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:45:54.120 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:45:54.120 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:45:54.120 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:45:54.378 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:45:54.378 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:45:54.378 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:45:54.378 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:45:54.378 
0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:45:54.378 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:45:54.378 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:45:54.378 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:45:54.378 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:45:54.378 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:45:55.752 Cleaning 00:45:55.752 Removing: /var/run/dpdk/spdk0/config 00:45:55.752 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:55.752 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:55.752 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:55.752 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:55.752 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:45:55.752 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:45:55.752 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:45:55.752 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:45:55.752 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:55.752 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:55.752 Removing: /var/run/dpdk/spdk1/config 00:45:55.752 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:45:55.752 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:45:55.752 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:45:55.752 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:45:55.752 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:45:55.752 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:45:55.752 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:45:55.752 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:45:55.752 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:45:55.752 Removing: /var/run/dpdk/spdk1/hugepage_info 00:45:55.752 Removing: /var/run/dpdk/spdk2/config 00:45:55.752 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:45:55.752 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:45:55.752 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:45:55.752 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:45:55.752 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:45:55.752 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:45:55.752 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:45:55.752 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:45:55.753 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:45:55.753 Removing: /var/run/dpdk/spdk2/hugepage_info 00:45:55.753 Removing: /var/run/dpdk/spdk3/config 00:45:55.753 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:45:55.753 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:45:55.753 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:45:55.753 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:45:55.753 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:45:55.753 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:45:55.753 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:45:55.753 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:45:55.753 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:45:55.753 Removing: /var/run/dpdk/spdk3/hugepage_info 00:45:55.753 Removing: /var/run/dpdk/spdk4/config 00:45:55.753 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:45:55.753 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:45:55.753 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:45:55.753 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:45:55.753 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:45:55.753 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:45:55.753 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:45:55.753 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:45:55.753 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:45:55.753 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:45:55.753 Removing: /dev/shm/bdev_svc_trace.1 00:45:55.753 Removing: /dev/shm/nvmf_trace.0 00:45:55.753 Removing: /dev/shm/spdk_tgt_trace.pid2813964 00:45:55.753 Removing: /var/run/dpdk/spdk0 00:45:55.753 Removing: /var/run/dpdk/spdk1 00:45:55.753 Removing: /var/run/dpdk/spdk2 00:45:55.753 Removing: /var/run/dpdk/spdk3 00:45:55.753 Removing: /var/run/dpdk/spdk4 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2811073 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2812208 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2813964 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2814693 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2815646 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2816059 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2817041 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2817186 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2817736 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2819805 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2820995 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2821587 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2822185 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2822791 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2823264 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2823549 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2823702 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2824022 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2824395 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2827227 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2827669 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2828227 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2828363 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2829721 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2829860 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2831105 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2831245 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2831680 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2831938 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2832362 00:45:55.753 Removing: 
/var/run/dpdk/spdk_pid2832506 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2833544 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2833706 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2834036 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2836674 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2839459 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2847133 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2847764 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2850421 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2850698 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2853739 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2857726 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2860051 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2867277 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2873047 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2874382 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2875177 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2886861 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2889535 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2947252 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2950796 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2954902 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2961143 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2990767 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2993959 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2995250 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2997214 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2997495 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2997774 00:45:55.753 Removing: /var/run/dpdk/spdk_pid2998163 00:45:56.012 Removing: /var/run/dpdk/spdk_pid2999002 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3000455 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3001853 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3002548 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3004431 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3005255 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3006079 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3008865 
00:45:56.012 Removing: /var/run/dpdk/spdk_pid3012549 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3012550 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3012551 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3015051 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3017399 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3020918 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3045023 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3048051 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3052702 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3054176 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3055915 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3057403 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3060467 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3063561 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3066209 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3070955 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3070968 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3074124 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3074263 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3074398 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3074791 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3074805 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3076005 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3077180 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3078457 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3079648 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3080839 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3082129 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3086696 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3087144 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3088469 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3089350 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3093392 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3095494 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3099312 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3102900 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3109647 00:45:56.012 Removing: 
/var/run/dpdk/spdk_pid3115014 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3115017 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3128169 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3128832 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3129500 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3130158 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3131150 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3131773 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3132354 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3133013 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3135797 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3136184 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3140231 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3140418 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3143923 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3147226 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3154464 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3154876 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3157579 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3157789 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3160684 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3164723 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3167036 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3174217 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3179922 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3181734 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3182528 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3193365 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3195893 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3198028 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3203583 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3203592 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3206740 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3208267 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3209788 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3210758 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3212911 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3213865 
00:45:56.012 Removing: /var/run/dpdk/spdk_pid3219574 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3219959 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3220357 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3222244 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3222539 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3222937 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3225386 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3225531 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3227214 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3227892 00:45:56.012 Removing: /var/run/dpdk/spdk_pid3228164 00:45:56.012 Clean 00:45:56.271 18:52:54 -- common/autotest_common.sh@1453 -- # return 0 00:45:56.271 18:52:54 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:45:56.271 18:52:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:56.271 18:52:54 -- common/autotest_common.sh@10 -- # set +x 00:45:56.271 18:52:54 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:45:56.271 18:52:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:56.271 18:52:54 -- common/autotest_common.sh@10 -- # set +x 00:45:56.271 18:52:54 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:56.271 18:52:54 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:45:56.271 18:52:54 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:45:56.271 18:52:54 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:45:56.271 18:52:54 -- spdk/autotest.sh@398 -- # hostname 00:45:56.271 18:52:54 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:45:56.529 geninfo: WARNING: invalid characters removed from testname! 00:46:28.675 18:53:23 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:30.049 18:53:28 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:33.340 18:53:31 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:35.869 18:53:34 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:39.153 18:53:36 -- 
spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:41.687 18:53:39 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:44.971 18:53:42 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:46:44.971 18:53:42 -- spdk/autorun.sh@1 -- $ timing_finish 00:46:44.971 18:53:42 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:46:44.971 18:53:42 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:46:44.971 18:53:42 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:46:44.971 18:53:42 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:46:44.971 + [[ -n 2739843 ]] 00:46:44.971 + sudo kill 2739843 00:46:44.980 [Pipeline] } 00:46:44.995 [Pipeline] // stage 00:46:45.000 [Pipeline] } 00:46:45.018 [Pipeline] // timeout 00:46:45.024 [Pipeline] } 00:46:45.037 [Pipeline] // catchError 00:46:45.043 [Pipeline] } 00:46:45.058 [Pipeline] // wrap 00:46:45.064 [Pipeline] } 00:46:45.077 [Pipeline] // catchError 00:46:45.086 [Pipeline] stage 
00:46:45.088 [Pipeline] { (Epilogue) 00:46:45.101 [Pipeline] catchError 00:46:45.103 [Pipeline] { 00:46:45.115 [Pipeline] echo 00:46:45.117 Cleanup processes 00:46:45.123 [Pipeline] sh 00:46:45.405 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:45.405 3241680 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:45.418 [Pipeline] sh 00:46:45.699 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:45.699 ++ grep -v 'sudo pgrep' 00:46:45.699 ++ awk '{print $1}' 00:46:45.699 + sudo kill -9 00:46:45.699 + true 00:46:45.710 [Pipeline] sh 00:46:45.992 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:46:58.203 [Pipeline] sh 00:46:58.486 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:46:58.486 Artifacts sizes are good 00:46:58.501 [Pipeline] archiveArtifacts 00:46:58.509 Archiving artifacts 00:46:58.662 [Pipeline] sh 00:46:58.974 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:46:58.989 [Pipeline] cleanWs 00:46:58.999 [WS-CLEANUP] Deleting project workspace... 00:46:58.999 [WS-CLEANUP] Deferred wipeout is used... 00:46:59.006 [WS-CLEANUP] done 00:46:59.008 [Pipeline] } 00:46:59.025 [Pipeline] // catchError 00:46:59.037 [Pipeline] sh 00:46:59.316 + logger -p user.info -t JENKINS-CI 00:46:59.325 [Pipeline] } 00:46:59.339 [Pipeline] // stage 00:46:59.344 [Pipeline] } 00:46:59.358 [Pipeline] // node 00:46:59.363 [Pipeline] End of Pipeline 00:46:59.401 Finished: SUCCESS